Jun 25 16:23:54.069078 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:23:54.069122 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:23:54.069138 kernel: BIOS-provided physical RAM map: Jun 25 16:23:54.069150 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jun 25 16:23:54.069162 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jun 25 16:23:54.069174 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jun 25 16:23:54.069189 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jun 25 16:23:54.069206 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jun 25 16:23:54.069227 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jun 25 16:23:54.069241 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jun 25 16:23:54.069255 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jun 25 16:23:54.069269 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jun 25 16:23:54.069280 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jun 25 16:23:54.069291 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jun 25 16:23:54.069318 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jun 25 16:23:54.069333 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jun 25 16:23:54.069348 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jun 25 16:23:54.069364 kernel: NX (Execute Disable) protection: active Jun 25 16:23:54.069379 kernel: efi: EFI v2.70 by EDK II Jun 25 16:23:54.069394 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jun 25 16:23:54.069408 kernel: SMBIOS 2.4 present. 
Jun 25 16:23:54.069424 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024 Jun 25 16:23:54.069438 kernel: Hypervisor detected: KVM Jun 25 16:23:54.069453 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:23:54.069473 kernel: kvm-clock: using sched offset of 6047216437 cycles Jun 25 16:23:54.069489 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:23:54.069505 kernel: tsc: Detected 2299.998 MHz processor Jun 25 16:23:54.069520 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:23:54.069536 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:23:54.069551 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jun 25 16:23:54.069566 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:23:54.069581 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jun 25 16:23:54.069597 kernel: Using GB pages for direct mapping Jun 25 16:23:54.069616 kernel: Secure boot disabled Jun 25 16:23:54.069631 kernel: ACPI: Early table checksum verification disabled Jun 25 16:23:54.069646 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jun 25 16:23:54.069662 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jun 25 16:23:54.069677 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jun 25 16:23:54.069693 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jun 25 16:23:54.069708 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jun 25 16:23:54.069731 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Jun 25 16:23:54.069751 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jun 25 16:23:54.069767 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jun 25 16:23:54.069784 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jun 25 16:23:54.069801 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jun 25 16:23:54.069817 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jun 25 16:23:54.069834 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jun 25 16:23:54.069854 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jun 25 16:23:54.069871 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jun 25 16:23:54.069887 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jun 25 16:23:54.069904 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jun 25 16:23:54.069921 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jun 25 16:23:54.069937 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jun 25 16:23:54.069953 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jun 25 16:23:54.069970 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jun 25 16:23:54.069986 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:23:54.070007 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:23:54.070045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 25 16:23:54.070062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jun 25 16:23:54.070078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jun 25 
16:23:54.070095 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jun 25 16:23:54.070112 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jun 25 16:23:54.070129 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jun 25 16:23:54.070146 kernel: Zone ranges: Jun 25 16:23:54.070162 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:23:54.070184 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 25 16:23:54.070201 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jun 25 16:23:54.070218 kernel: Movable zone start for each node Jun 25 16:23:54.070235 kernel: Early memory node ranges Jun 25 16:23:54.070251 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jun 25 16:23:54.070268 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jun 25 16:23:54.070285 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jun 25 16:23:54.070308 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jun 25 16:23:54.070325 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jun 25 16:23:54.070345 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jun 25 16:23:54.070362 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:23:54.070379 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jun 25 16:23:54.070395 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jun 25 16:23:54.070412 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jun 25 16:23:54.070429 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jun 25 16:23:54.070446 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 16:23:54.070462 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:23:54.070479 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:23:54.070499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:23:54.070516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:23:54.070533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:23:54.070550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:23:54.070564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:23:54.070578 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:23:54.070594 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jun 25 16:23:54.070611 kernel: Booting paravirtualized kernel on KVM Jun 25 16:23:54.070627 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:23:54.070649 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:23:54.070664 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:23:54.070680 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:23:54.070696 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:23:54.070711 kernel: kvm-guest: PV spinlocks enabled Jun 25 16:23:54.070728 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:23:54.070744 kernel: Fallback order for Node 0: 0 Jun 25 16:23:54.070760 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932280 Jun 25 16:23:54.070776 kernel: Policy zone: Normal Jun 25 16:23:54.070799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:23:54.070816 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:23:54.070833 kernel: random: crng init done Jun 25 16:23:54.070849 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jun 25 16:23:54.070865 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 16:23:54.070880 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:23:54.070895 kernel: software IO TLB: area num 2. Jun 25 16:23:54.070912 kernel: Memory: 7510708K/7860584K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 349616K reserved, 0K cma-reserved) Jun 25 16:23:54.070934 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:23:54.070951 kernel: Kernel/User page tables isolation: enabled Jun 25 16:23:54.070968 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:23:54.070984 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:23:54.071008 kernel: Dynamic Preempt: voluntary Jun 25 16:23:54.075085 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:23:54.075114 kernel: rcu: RCU event tracing is enabled. Jun 25 16:23:54.075271 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:23:54.075292 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:23:54.075341 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:23:54.075359 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:23:54.075495 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:23:54.075517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:23:54.075535 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 16:23:54.075553 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:23:54.075570 kernel: Console: colour dummy device 80x25 Jun 25 16:23:54.075707 kernel: printk: console [ttyS0] enabled Jun 25 16:23:54.075726 kernel: ACPI: Core revision 20220331 Jun 25 16:23:54.075749 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:23:54.075767 kernel: x2apic enabled Jun 25 16:23:54.075785 kernel: Switched APIC routing to physical x2apic. Jun 25 16:23:54.075871 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jun 25 16:23:54.075890 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jun 25 16:23:54.075908 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jun 25 16:23:54.075927 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jun 25 16:23:54.075945 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jun 25 16:23:54.075968 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:23:54.075986 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jun 25 16:23:54.076004 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jun 25 16:23:54.080031 kernel: Spectre V2 : Mitigation: IBRS Jun 25 16:23:54.080057 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:23:54.080075 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:23:54.080093 kernel: RETBleed: Mitigation: IBRS Jun 25 16:23:54.080110 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:23:54.080127 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jun 25 16:23:54.080153 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:23:54.080170 kernel: MDS: Mitigation: Clear CPU buffers Jun 25 16:23:54.080187 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:23:54.080205 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:23:54.080222 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:23:54.080239 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:23:54.080256 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:23:54.080274 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 25 16:23:54.080295 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:23:54.080320 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:23:54.080337 kernel: LSM: Security Framework initializing Jun 25 16:23:54.080354 kernel: SELinux: Initializing. Jun 25 16:23:54.080371 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:23:54.080389 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:23:54.080406 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jun 25 16:23:54.080424 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:23:54.080441 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:23:54.080462 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:23:54.080479 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:23:54.080497 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:23:54.080513 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:23:54.080530 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jun 25 16:23:54.080548 kernel: signal: max sigframe size: 1776 Jun 25 16:23:54.080565 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:23:54.080583 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:23:54.080600 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:23:54.080617 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:23:54.080639 kernel: x86: Booting SMP configuration: Jun 25 16:23:54.080656 kernel: .... 
node #0, CPUs: #1 Jun 25 16:23:54.080675 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 25 16:23:54.080693 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 25 16:23:54.080710 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:23:54.080728 kernel: smpboot: Max logical packages: 1 Jun 25 16:23:54.080745 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jun 25 16:23:54.080762 kernel: devtmpfs: initialized Jun 25 16:23:54.080779 kernel: x86/mm: Memory block size: 128MB Jun 25 16:23:54.080801 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jun 25 16:23:54.080818 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:23:54.080836 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:23:54.080853 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:23:54.080871 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:23:54.080888 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:23:54.080905 kernel: audit: type=2000 audit(1719332632.462:1): state=initialized audit_enabled=0 res=1 Jun 25 16:23:54.080922 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:23:54.080939 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:23:54.080961 kernel: cpuidle: using governor menu Jun 25 16:23:54.080978 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:23:54.081002 kernel: dca service started, version 1.12.1 Jun 25 16:23:54.081031 kernel: PCI: Using configuration type 1 for base access Jun 25 16:23:54.081049 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 16:23:54.081067 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:23:54.081084 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:23:54.081101 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:23:54.081118 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:23:54.081140 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:23:54.081157 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:23:54.081175 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:23:54.081192 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:23:54.081209 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 25 16:23:54.081226 kernel: ACPI: Interpreter enabled Jun 25 16:23:54.081243 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 16:23:54.081261 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:23:54.081278 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:23:54.081299 kernel: PCI: Ignoring E820 reservations for host bridge windows Jun 25 16:23:54.081323 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 25 16:23:54.081341 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:23:54.081605 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:23:54.081771 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 16:23:54.081926 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Jun 25 16:23:54.081947 kernel: PCI host bridge to bus 0000:00 Jun 25 16:23:54.082126 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:23:54.082272 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:23:54.082424 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:23:54.082566 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jun 25 16:23:54.082716 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:23:54.082889 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:23:54.083124 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jun 25 16:23:54.083299 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 16:23:54.083464 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 16:23:54.083641 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jun 25 16:23:54.083820 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jun 25 16:23:54.083988 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jun 25 16:23:54.084198 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:23:54.084378 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jun 25 16:23:54.084539 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jun 25 16:23:54.084709 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 16:23:54.084871 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jun 25 16:23:54.085346 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jun 25 16:23:54.085491 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:23:54.085510 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:23:54.085534 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 
16:23:54.085551 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:23:54.085568 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:23:54.085703 kernel: iommu: Default domain type: Translated Jun 25 16:23:54.085721 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:23:54.085738 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:23:54.085755 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:23:54.085773 kernel: PTP clock support registered Jun 25 16:23:54.085902 kernel: Registered efivars operations Jun 25 16:23:54.085923 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:23:54.085940 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:23:54.085957 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jun 25 16:23:54.085974 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jun 25 16:23:54.085991 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jun 25 16:23:54.086132 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jun 25 16:23:54.086149 kernel: vgaarb: loaded Jun 25 16:23:54.086166 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:23:54.086184 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:23:54.086205 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:23:54.086352 kernel: pnp: PnP ACPI init Jun 25 16:23:54.086391 kernel: pnp: PnP ACPI: found 7 devices Jun 25 16:23:54.086425 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:23:54.086442 kernel: NET: Registered PF_INET protocol family Jun 25 16:23:54.086459 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:23:54.086477 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jun 25 16:23:54.086495 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:23:54.086511 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 16:23:54.086529 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jun 25 16:23:54.086545 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jun 25 16:23:54.086563 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 16:23:54.086580 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jun 25 16:23:54.086596 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:23:54.086612 kernel: NET: Registered PF_XDP protocol family Jun 25 16:23:54.086852 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:23:54.087049 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:23:54.090065 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:23:54.090229 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jun 25 16:23:54.090407 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:23:54.090434 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:23:54.090453 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 25 16:23:54.090472 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Jun 25 16:23:54.090489 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:23:54.090508 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jun 
25 16:23:54.090533 kernel: clocksource: Switched to clocksource tsc Jun 25 16:23:54.090558 kernel: Initialise system trusted keyrings Jun 25 16:23:54.090576 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jun 25 16:23:54.090595 kernel: Key type asymmetric registered Jun 25 16:23:54.090613 kernel: Asymmetric key parser 'x509' registered Jun 25 16:23:54.090631 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:23:54.090649 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:23:54.090667 kernel: io scheduler mq-deadline registered Jun 25 16:23:54.090684 kernel: io scheduler kyber registered Jun 25 16:23:54.090707 kernel: io scheduler bfq registered Jun 25 16:23:54.090725 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:23:54.090744 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 16:23:54.090919 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jun 25 16:23:54.090942 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 25 16:23:54.091124 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jun 25 16:23:54.091147 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 16:23:54.091302 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jun 25 16:23:54.091329 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:23:54.091346 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:23:54.091364 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jun 25 16:23:54.091380 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jun 25 16:23:54.091398 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jun 25 16:23:54.091568 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jun 25 16:23:54.091592 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:23:54.091608 kernel: i8042: Warning: Keylock active Jun 25 16:23:54.091626 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:23:54.091647 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:23:54.091815 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 25 16:23:54.091962 kernel: rtc_cmos 00:00: registered as rtc0 Jun 25 16:23:54.092192 kernel: rtc_cmos 00:00: setting system clock to 2024-06-25T16:23:53 UTC (1719332633) Jun 25 16:23:54.092346 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 25 16:23:54.092369 kernel: intel_pstate: CPU model not supported Jun 25 16:23:54.092388 kernel: pstore: Registered efi as persistent store backend Jun 25 16:23:54.092414 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:23:54.092432 kernel: Segment Routing with IPv6 Jun 25 16:23:54.092450 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:23:54.092468 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:23:54.092486 kernel: Key type dns_resolver registered Jun 25 16:23:54.092504 kernel: IPI shorthand broadcast: enabled Jun 25 16:23:54.092522 kernel: sched_clock: Marking stable (940082575, 136783474)->(1104780073, -27914024) Jun 25 16:23:54.092540 kernel: registered taskstats version 1 Jun 25 16:23:54.092565 kernel: Loading compiled-in X.509 certificates Jun 25 16:23:54.092581 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:23:54.092604 kernel: Key type .fscrypt registered Jun 25 16:23:54.092621 kernel: Key type fscrypt-provisioning 
registered Jun 25 16:23:54.092639 kernel: pstore: Using crash dump compression: deflate Jun 25 16:23:54.092656 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:23:54.092673 kernel: ima: No architecture policies found Jun 25 16:23:54.092690 kernel: clk: Disabling unused clocks Jun 25 16:23:54.092707 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:23:54.092724 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:23:54.092745 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:23:54.092762 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:23:54.092779 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:23:54.092796 kernel: Run /init as init process Jun 25 16:23:54.092813 kernel: with arguments: Jun 25 16:23:54.092830 kernel: /init Jun 25 16:23:54.092847 kernel: with environment: Jun 25 16:23:54.092863 kernel: HOME=/ Jun 25 16:23:54.092879 kernel: TERM=linux Jun 25 16:23:54.092901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:23:54.092921 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:23:54.092943 systemd[1]: Detected virtualization kvm. Jun 25 16:23:54.092961 systemd[1]: Detected architecture x86-64. Jun 25 16:23:54.092978 systemd[1]: Running in initrd. Jun 25 16:23:54.092996 systemd[1]: No hostname configured, using default hostname. Jun 25 16:23:54.093012 systemd[1]: Hostname set to . Jun 25 16:23:54.093057 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:23:54.093082 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:23:54.093132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:23:54.093151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:23:54.093168 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:23:54.093185 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:23:54.093203 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:23:54.093221 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:23:54.093265 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:23:54.093283 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:23:54.093302 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:23:54.093321 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:23:54.093340 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:23:54.093363 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:23:54.093382 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:23:54.093401 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:23:54.093419 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:23:54.093438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:23:54.093457 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jun 25 16:23:54.093475 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:23:54.093494 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:23:54.093512 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:23:54.093536 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:23:54.093562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:23:54.093580 kernel: audit: type=1130 audit(1719332634.070:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.093598 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:23:54.093615 kernel: audit: type=1130 audit(1719332634.075:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.093634 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:23:54.093666 kernel: audit: type=1130 audit(1719332634.085:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.093688 systemd-journald[185]: Journal started Jun 25 16:23:54.093789 systemd-journald[185]: Runtime Journal (/run/log/journal/229d986b5689ad1d167f526309bfcfa4) is 8.0M, max 148.7M, 140.7M free. Jun 25 16:23:54.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.089920 systemd-modules-load[186]: Inserted module 'overlay' Jun 25 16:23:54.098067 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:23:54.115040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:23:54.123411 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:23:54.123507 kernel: audit: type=1130 audit(1719332634.119:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.136043 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:23:54.137324 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:23:54.137813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 25 16:23:54.142574 kernel: audit: type=1130 audit(1719332634.136:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.149050 kernel: Bridge firewalling registered Jun 25 16:23:54.146468 systemd-modules-load[186]: Inserted module 'br_netfilter' Jun 25 16:23:54.168209 kernel: audit: type=1130 audit(1719332634.160:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.155551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:23:54.164566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:23:54.185188 kernel: SCSI subsystem initialized Jun 25 16:23:54.185235 kernel: audit: type=1130 audit(1719332634.175:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.184403 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:23:54.194000 audit: BPF prog-id=6 op=LOAD Jun 25 16:23:54.198871 kernel: audit: type=1334 audit(1719332634.194:9): prog-id=6 op=LOAD Jun 25 16:23:54.198934 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:23:54.198961 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:23:54.198985 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:23:54.196328 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:23:54.204053 systemd-modules-load[186]: Inserted module 'dm_multipath' Jun 25 16:23:54.209433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:23:54.221669 dracut-cmdline[204]: dracut-dracut-053 Jun 25 16:23:54.232178 kernel: audit: type=1130 audit(1719332634.224:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:54.232302 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:23:54.233681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:23:54.257514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:23:54.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.270920 systemd-resolved[208]: Positive Trust Anchors: Jun 25 16:23:54.270946 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:23:54.270999 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:23:54.276384 systemd-resolved[208]: Defaulting to hostname 'linux'. Jun 25 16:23:54.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.278287 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:23:54.292357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:23:54.344073 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:23:54.359078 kernel: iscsi: registered transport (tcp) Jun 25 16:23:54.386336 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:23:54.386445 kernel: QLogic iSCSI HBA Driver Jun 25 16:23:54.440730 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:23:54.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.448437 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:23:54.527094 kernel: raid6: avx2x4 gen() 22623 MB/s Jun 25 16:23:54.544073 kernel: raid6: avx2x2 gen() 24137 MB/s Jun 25 16:23:54.562041 kernel: raid6: avx2x1 gen() 21452 MB/s Jun 25 16:23:54.562099 kernel: raid6: using algorithm avx2x2 gen() 24137 MB/s Jun 25 16:23:54.579538 kernel: raid6: .... xor() 17891 MB/s, rmw enabled Jun 25 16:23:54.579628 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:23:54.584062 kernel: xor: automatically using best checksumming function avx Jun 25 16:23:54.752067 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:23:54.765379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jun 25 16:23:54.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.765000 audit: BPF prog-id=7 op=LOAD Jun 25 16:23:54.765000 audit: BPF prog-id=8 op=LOAD Jun 25 16:23:54.769365 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:23:54.795893 systemd-udevd[386]: Using default interface naming scheme 'v252'. Jun 25 16:23:54.803682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:23:54.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.812309 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:23:54.834342 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation Jun 25 16:23:54.876368 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:23:54.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:54.884377 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:23:54.951576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:23:54.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:55.034482 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:23:55.049054 kernel: scsi host0: Virtio SCSI HBA Jun 25 16:23:55.062053 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jun 25 16:23:55.138214 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:23:55.138299 kernel: AES CTR mode by8 optimization enabled Jun 25 16:23:55.174173 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jun 25 16:23:55.188251 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jun 25 16:23:55.188532 kernel: sd 0:0:1:0: [sda] Write Protect is off Jun 25 16:23:55.188758 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jun 25 16:23:55.188976 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jun 25 16:23:55.189236 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:23:55.189262 kernel: GPT:17805311 != 25165823 Jun 25 16:23:55.189285 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:23:55.189307 kernel: GPT:17805311 != 25165823 Jun 25 16:23:55.189326 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:23:55.189346 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:23:55.189374 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jun 25 16:23:55.251051 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443) Jun 25 16:23:55.254061 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
Jun 25 16:23:55.269206 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (441) Jun 25 16:23:55.276013 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jun 25 16:23:55.285575 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jun 25 16:23:55.290558 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jun 25 16:23:55.296178 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jun 25 16:23:55.312842 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:23:55.338093 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:23:55.338478 disk-uuid[520]: Primary Header is updated. Jun 25 16:23:55.338478 disk-uuid[520]: Secondary Entries is updated. Jun 25 16:23:55.338478 disk-uuid[520]: Secondary Header is updated. Jun 25 16:23:56.371219 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 16:23:56.371314 disk-uuid[521]: The operation has completed successfully. Jun 25 16:23:56.449409 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:23:56.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.449576 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:23:56.463411 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:23:56.472386 sh[538]: Success Jun 25 16:23:56.486056 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:23:56.586260 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:23:56.589678 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:23:56.599560 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:23:56.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.616181 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:23:56.616259 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:56.616285 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:23:56.617455 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:23:56.618407 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:23:56.649087 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:23:56.654375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:23:56.662607 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:23:56.671446 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 25 16:23:56.686736 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:56.686834 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:56.686859 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:23:56.710794 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:23:56.716217 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:56.729175 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:23:56.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.739380 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:23:56.862465 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:23:56.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.878000 audit: BPF prog-id=9 op=LOAD Jun 25 16:23:56.884329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:23:56.906420 ignition[657]: Ignition 2.15.0 Jun 25 16:23:56.907073 ignition[657]: Stage: fetch-offline Jun 25 16:23:56.909782 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:23:56.907526 ignition[657]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:56.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.923897 systemd-networkd[723]: lo: Link UP Jun 25 16:23:56.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.907549 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 25 16:23:56.923901 systemd-networkd[723]: lo: Gained carrier Jun 25 16:23:56.907755 ignition[657]: parsed url from cmdline: "" Jun 25 16:23:56.924625 systemd-networkd[723]: Enumeration completed Jun 25 16:23:56.907762 ignition[657]: no config URL provided Jun 25 16:23:56.924757 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:23:56.907777 ignition[657]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:23:56.925225 systemd-networkd[723]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:23:56.907793 ignition[657]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:23:56.925233 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 25 16:23:56.907806 ignition[657]: failed to fetch config: resource requires networking Jun 25 16:23:56.927978 systemd-networkd[723]: eth0: Link UP Jun 25 16:23:56.908230 ignition[657]: Ignition finished successfully Jun 25 16:23:56.927985 systemd-networkd[723]: eth0: Gained carrier Jun 25 16:23:56.976974 ignition[727]: Ignition 2.15.0 Jun 25 16:23:56.927997 systemd-networkd[723]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:23:56.976984 ignition[727]: Stage: fetch Jun 25 16:23:56.932305 systemd[1]: Reached target network.target - Network. Jun 25 16:23:56.977210 ignition[727]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:57.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.942146 systemd-networkd[723]: eth0: DHCPv4 address 10.128.0.51/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jun 25 16:23:57.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.977232 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 25 16:23:56.943326 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:23:56.977417 ignition[727]: parsed url from cmdline: "" Jun 25 16:23:56.957718 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:23:56.977424 ignition[727]: no config URL provided Jun 25 16:23:56.991846 unknown[727]: fetched base config from "system" Jun 25 16:23:56.977432 ignition[727]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:23:56.991861 unknown[727]: fetched base config from "system" Jun 25 16:23:57.059179 iscsid[742]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:23:57.059179 iscsid[742]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jun 25 16:23:57.059179 iscsid[742]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 16:23:57.059179 iscsid[742]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:23:57.059179 iscsid[742]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:23:57.059179 iscsid[742]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:23:57.059179 iscsid[742]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:23:57.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:57.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:23:56.977444 ignition[727]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:23:56.991870 unknown[727]: fetched user config from "gcp" Jun 25 16:23:56.977476 ignition[727]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jun 25 16:23:57.006606 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:23:57.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:56.984484 ignition[727]: GET result: OK Jun 25 16:23:57.015671 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:23:56.984650 ignition[727]: parsing config with SHA512: 6e60e1658f394bbe6d44a6ac2405689aa41949a737bae017d8332f960f71eee8730c4b37ad688c970b36ad687cde0873d142d3831bde77fd2f4ab57a6a154f3f Jun 25 16:23:57.029271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:23:56.996946 ignition[727]: fetch: fetch complete Jun 25 16:23:57.046560 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:23:56.996958 ignition[727]: fetch: fetch passed Jun 25 16:23:57.062568 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:23:56.997593 ignition[727]: Ignition finished successfully Jun 25 16:23:57.066715 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:23:57.056818 ignition[736]: Ignition 2.15.0 Jun 25 16:23:57.079347 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:23:57.056831 ignition[736]: Stage: kargs Jun 25 16:23:57.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:57.095620 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:23:57.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:57.056974 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:57.109056 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:23:57.056986 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 25 16:23:57.121083 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:23:57.058190 ignition[736]: kargs: kargs passed Jun 25 16:23:57.129213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:23:57.058253 ignition[736]: Ignition finished successfully Jun 25 16:23:57.134193 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:23:57.117749 ignition[749]: Ignition 2.15.0 Jun 25 16:23:57.158439 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:23:57.117848 ignition[749]: Stage: disks Jun 25 16:23:57.175929 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:23:57.118907 ignition[749]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:57.179780 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:23:57.118927 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 25 16:23:57.188546 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jun 25 16:23:57.120840 ignition[749]: disks: disks passed Jun 25 16:23:57.196324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:23:57.120936 ignition[749]: Ignition finished successfully Jun 25 16:23:57.204345 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:23:57.208385 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:23:57.213377 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:23:57.226406 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:23:57.270642 systemd-fsck[765]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 25 16:23:57.274412 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:23:57.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:57.284281 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:23:57.390480 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:23:57.391532 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:23:57.392150 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:23:57.407277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:23:57.414431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:23:57.424678 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:23:57.424769 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:23:57.436214 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (771) Jun 25 16:23:57.436257 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:57.436279 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:57.436473 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:23:57.424815 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:23:57.432214 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:23:57.448395 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:23:57.454952 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:23:57.562382 initrd-setup-root[795]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:23:57.570320 initrd-setup-root[802]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:23:57.578448 initrd-setup-root[809]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:23:57.585639 initrd-setup-root[816]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:23:57.728836 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:23:57.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:57.734322 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:23:57.740714 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
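systemd-fsck-root and the sysroot mount above operate on /dev/disk/by-label/ROOT, one of the by-label symlinks that udev maintains for labelled filesystems. A short illustrative sketch, assuming such a symlink exists, of resolving the label to the underlying block device before checking or mounting it:

# Sketch: resolve a filesystem label (e.g. ROOT) to its block device via the
# udev-maintained /dev/disk/by-label/ symlinks.
import os

def device_for_label(label: str) -> str:
    link = os.path.join("/dev/disk/by-label", label)
    if not os.path.exists(link):
        raise FileNotFoundError(f"no device carries label {label!r}")
    return os.path.realpath(link)  # e.g. /dev/sda9 for LABEL=ROOT on this boot

if __name__ == "__main__":
    print("ROOT ->", device_for_label("ROOT"))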
Jun 25 16:23:57.748124 kernel: BTRFS info (device sda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:57.752373 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:23:57.788126 ignition[882]: INFO : Ignition 2.15.0 Jun 25 16:23:57.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:57.788114 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:23:57.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:23:57.804299 ignition[882]: INFO : Stage: mount Jun 25 16:23:57.804299 ignition[882]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:57.804299 ignition[882]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 25 16:23:57.804299 ignition[882]: INFO : mount: mount passed Jun 25 16:23:57.804299 ignition[882]: INFO : Ignition finished successfully Jun 25 16:23:57.792924 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:23:57.805447 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:23:57.829621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:23:57.846191 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (892) Jun 25 16:23:57.849419 kernel: BTRFS info (device sda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:23:57.849486 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:23:57.849511 kernel: BTRFS info (device sda6): using free space tree Jun 25 16:23:57.857497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
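The mount stage above mounts the BTRFS OEM partition (/dev/sda6) on /sysroot/oem, and it is mounted again just before the files stage runs. A small sketch that merely confirms such a mount by scanning /proc/mounts (the systemd mount units do the actual mounting; this is only a read-side check):

# Sketch: report whether a mount point is present and with which filesystem type,
# by reading /proc/mounts ("device mountpoint fstype options ..." per line).
def mount_info(mountpoint: str):
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, where, fstype = line.split()[:3]
            if where == mountpoint:
                return device, fstype
    return None

if __name__ == "__main__":
    info = mount_info("/sysroot/oem")
    print("/sysroot/oem:", info or "not mounted")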
Jun 25 16:23:57.881982 ignition[910]: INFO : Ignition 2.15.0 Jun 25 16:23:57.886202 ignition[910]: INFO : Stage: files Jun 25 16:23:57.886202 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:23:57.886202 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 25 16:23:57.886202 ignition[910]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:23:57.902198 ignition[910]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:23:57.902198 ignition[910]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:23:57.902198 ignition[910]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:23:57.902198 ignition[910]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:23:57.902198 ignition[910]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:23:57.902198 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:23:57.902198 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 16:23:57.902198 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:23:57.902198 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:23:57.893062 unknown[910]: wrote ssh authorized keys file for user: core Jun 25 16:23:58.386321 systemd-networkd[723]: eth0: Gained IPv6LL Jun 25 16:24:05.182704 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 16:24:05.327577 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:05.345258 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:24:06.636804 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 16:24:07.059076 ignition[910]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:24:07.077183 ignition[910]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:24:07.077183 ignition[910]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:24:07.077183 ignition[910]: INFO : files: files passed Jun 25 16:24:07.077183 ignition[910]: INFO : Ignition finished successfully Jun 25 16:24:07.411216 kernel: kauditd_printk_skb: 28 callbacks suppressed Jun 25 16:24:07.411294 kernel: audit: type=1130 audit(1719332647.096:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.411323 kernel: audit: type=1130 audit(1719332647.200:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:24:07.411344 kernel: audit: type=1131 audit(1719332647.200:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.411367 kernel: audit: type=1130 audit(1719332647.290:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.063922 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:24:07.126447 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:24:07.159327 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:24:07.492313 kernel: audit: type=1130 audit(1719332647.439:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.492404 kernel: audit: type=1131 audit(1719332647.439:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.183707 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:24:07.510259 initrd-setup-root-after-ignition[936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:24:07.510259 initrd-setup-root-after-ignition[936]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:24:07.183824 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:24:07.568254 initrd-setup-root-after-ignition[940]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:24:07.612374 kernel: audit: type=1130 audit(1719332647.577:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:07.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.201789 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:24:07.291711 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:24:07.352443 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:24:07.426284 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:24:07.426421 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:24:07.732287 kernel: audit: type=1131 audit(1719332647.700:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.440750 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:24:07.502403 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:24:07.520507 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:24:07.527561 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:24:07.558826 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:24:07.605614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:24:07.630351 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:24:07.644448 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:24:07.664767 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:24:07.683548 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:24:07.683773 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:24:07.701831 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:24:07.742588 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:24:07.999302 kernel: audit: type=1131 audit(1719332647.969:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.760532 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:24:07.780537 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:24:08.057233 kernel: audit: type=1131 audit(1719332648.018:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:08.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.813541 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:24:08.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.834542 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:24:08.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.855539 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:24:07.875526 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:24:07.895521 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:24:08.128374 ignition[954]: INFO : Ignition 2.15.0 Jun 25 16:24:08.128374 ignition[954]: INFO : Stage: umount Jun 25 16:24:08.128374 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:24:08.128374 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jun 25 16:24:08.128374 ignition[954]: INFO : umount: umount passed Jun 25 16:24:08.128374 ignition[954]: INFO : Ignition finished successfully Jun 25 16:24:08.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.913492 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:24:08.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.242480 iscsid[742]: iscsid shutting down. Jun 25 16:24:08.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.931496 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:24:08.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.950438 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:24:08.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:07.950665 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:24:08.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:07.970756 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:24:08.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.009500 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:24:08.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.009733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:24:08.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.019848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:24:08.020128 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:24:08.067627 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:24:08.067825 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:24:08.094793 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:24:08.136689 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:24:08.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.150204 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:24:08.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.150509 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:24:08.175714 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:24:08.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.189388 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:24:08.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.189671 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:24:08.213583 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:24:08.213817 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:24:08.235669 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:24:08.236818 systemd[1]: iscsid.service: Deactivated successfully. 
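The Ignition files stage recorded above, before this umount sequence begins, downloads release artifacts with numbered attempts and links the kubernetes sysext image into /etc/extensions. A rough Python sketch of that download-and-link pattern follows; the URL, target path, and link target are the ones named in the log, while the retry count and backoff are assumptions, and this is not Ignition's actual code.

# Sketch: download a sysext image with numbered attempts (as in "GET ...: attempt #1")
# and publish it via a symlink, mirroring the files-stage ops shown above.
import os
import time
import urllib.request

URL = ("https://github.com/flatcar/sysext-bakery/releases/download/latest/"
       "kubernetes-v1.28.7-x86-64.raw")
TARGET = "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
LINK = "/sysroot/etc/extensions/kubernetes.raw"

def fetch_with_attempts(url: str, dest: str, attempts: int = 5) -> None:
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            urllib.request.urlretrieve(url, dest)
            return
        except OSError:
            time.sleep(2 * attempt)  # simple backoff between attempts (assumption)
    raise RuntimeError("download failed after all attempts")

if __name__ == "__main__":
    fetch_with_attempts(URL, TARGET)
    os.makedirs(os.path.dirname(LINK), exist_ok=True)
    # The link target omits the /sysroot prefix: it must be valid after the
    # initrd pivots to the real root, exactly as written in the log above.
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw", LINK)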
Jun 25 16:24:08.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.237044 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:24:08.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.659000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:24:08.251172 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:24:08.251311 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:24:08.269012 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:24:08.269174 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:24:08.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.284269 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:24:08.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.284464 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:24:08.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.302356 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:24:08.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.302446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:24:08.322365 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:24:08.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.322449 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:24:08.342381 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:24:08.342480 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:24:08.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.360356 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:24:08.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.378226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:24:08.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:08.384189 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:24:08.398227 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:24:08.416247 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:24:08.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.432324 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:24:08.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.432415 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:24:08.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:08.450303 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:24:08.450407 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:24:08.470411 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:24:09.042000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:24:09.042000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:24:09.044000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:24:09.044000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:24:09.044000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:24:08.470506 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:24:08.490583 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:24:08.501109 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:24:09.079282 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Jun 25 16:24:08.501270 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:24:08.518926 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:24:08.519126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:24:08.546630 systemd[1]: Stopped target network.target - Network. Jun 25 16:24:08.563400 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:24:08.563483 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:24:08.584663 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:24:08.590123 systemd-networkd[723]: eth0: DHCPv6 lease lost Jun 25 16:24:08.604502 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:24:08.624609 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:24:08.624778 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:24:08.642961 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:24:08.643164 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:24:08.660787 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jun 25 16:24:08.660847 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:24:08.685364 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:24:08.703174 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:24:08.703410 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:24:08.721482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:24:08.721569 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:24:08.739584 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:24:08.739662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:24:08.757448 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:24:08.757540 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:24:08.775664 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:24:08.795034 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:24:08.795181 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:24:08.795987 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:24:08.796302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:24:08.806860 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:24:08.807045 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:24:08.830356 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:24:08.830437 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:24:08.848299 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:24:08.848486 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:24:08.868406 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:24:08.868497 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:24:08.888367 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:24:08.888463 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:24:08.913485 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:24:08.939291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:24:08.939406 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:24:08.958200 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:24:08.958347 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:24:08.976754 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:24:08.976919 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:24:08.994649 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:24:09.019383 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:24:09.039699 systemd[1]: Switching root. Jun 25 16:24:09.091427 systemd-journald[185]: Journal stopped Jun 25 16:24:11.314102 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
Jun 25 16:24:11.314219 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:24:11.314245 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:24:11.314274 kernel: SELinux: policy capability open_perms=1 Jun 25 16:24:11.314303 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:24:11.314326 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:24:11.314358 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:24:11.314388 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:24:11.314411 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:24:11.314434 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:24:11.314459 systemd[1]: Successfully loaded SELinux policy in 107.082ms. Jun 25 16:24:11.314510 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.233ms. Jun 25 16:24:11.314536 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:24:11.314561 systemd[1]: Detected virtualization kvm. Jun 25 16:24:11.314586 systemd[1]: Detected architecture x86-64. Jun 25 16:24:11.314614 systemd[1]: Detected first boot. Jun 25 16:24:11.314646 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:24:11.314674 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:24:11.314699 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:24:11.314731 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 16:24:11.314756 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:24:11.314782 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:24:11.314811 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:24:11.314835 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:24:11.314860 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:24:11.314886 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:24:11.314911 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:24:11.314935 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:24:11.314960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:24:11.314984 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:24:11.315013 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:24:11.315050 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:24:11.315075 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:24:11.315099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:24:11.315124 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:24:11.315148 systemd[1]: Reached target slices.target - Slice Units. 
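systemd above reports detecting KVM, x86-64, a first boot, and a machine ID initialized from the VM UUID. As a rough illustration only (the real rules are systemd's and are not shown in the log), first boot hinges on /etc/machine-id being absent or uninitialized, and one common source for a VM UUID on KVM guests is the DMI product UUID in sysfs:

# Rough sketch of the two checks hinted at above.  The sysfs path is an
# assumption for illustration, not something taken from this log.
import os
from typing import Optional

def looks_like_first_boot(path: str = "/etc/machine-id") -> bool:
    if not os.path.exists(path):
        return True
    with open(path) as f:
        content = f.read().strip()
    return content in ("", "uninitialized")

def vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> Optional[str]:
    try:
        with open(path) as f:  # typically readable by root only
            return f.read().strip()
    except OSError:
        return None

if __name__ == "__main__":
    print("first boot:", looks_like_first_boot())
    print("VM UUID:", vm_uuid())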
Jun 25 16:24:11.315172 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:24:11.315196 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:24:11.315223 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:24:11.315253 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:24:11.315284 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:24:11.315308 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:24:11.315342 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:24:11.315367 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:24:11.315392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:24:11.315416 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:24:11.315441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:24:11.315465 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:24:11.315494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:24:11.315515 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:24:11.315536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:11.315561 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:24:11.315581 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:24:11.315783 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:24:11.315822 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:24:11.315847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:24:11.315894 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:24:11.315918 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:24:11.315942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:24:11.315966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:24:11.315989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:24:11.316011 kernel: ACPI: bus type drm_connector registered Jun 25 16:24:11.316073 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:24:11.316098 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:24:11.316115 kernel: fuse: init (API version 7.37) Jun 25 16:24:11.316136 kernel: loop: module loaded Jun 25 16:24:11.316151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:24:11.316167 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 16:24:11.316188 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jun 25 16:24:11.316203 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:24:11.316220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
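The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units above load kernel modules on demand, and the kernel confirms some of them ("fuse: init", "loop: module loaded"). A small read-only sketch that checks which of those modules ended up loaded, by reading /proc/modules; built-in modules will not appear there, which is why the fallback wording below hedges:

# Sketch: confirm that the modules requested by the modprobe@ units are loaded.
# /proc/modules lists one loaded module per line, name first.
WANTED = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

def loaded_modules():
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f}

if __name__ == "__main__":
    present = loaded_modules()
    for module in sorted(WANTED):
        status = "loaded" if module in present else "missing (or built-in)"
        print(f"{module}: {status}")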
Jun 25 16:24:11.316241 systemd-journald[1107]: Journal started Jun 25 16:24:11.316317 systemd-journald[1107]: Runtime Journal (/run/log/journal/229d986b5689ad1d167f526309bfcfa4) is 8.0M, max 148.7M, 140.7M free. Jun 25 16:24:10.677000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jun 25 16:24:10.677000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 16:24:11.305000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:24:11.305000 audit[1107]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe1cb8aab0 a2=4000 a3=7ffe1cb8ab4c items=0 ppid=1 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:11.305000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:24:11.339050 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:24:11.362058 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:24:11.389208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:24:11.416059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:11.428075 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:24:11.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.440139 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:24:11.450445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:24:11.461473 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:24:11.471440 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:24:11.481454 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:24:11.491447 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:24:11.500770 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:24:11.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.510906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:24:11.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.521793 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:24:11.522147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 25 16:24:11.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.532884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:24:11.533189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:24:11.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.543811 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:24:11.544134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:24:11.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.554799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:24:11.555116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:24:11.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.565746 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:24:11.566065 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:24:11.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.576747 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:24:11.577200 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:24:11.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:11.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.587785 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:24:11.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.597718 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:24:11.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.608719 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:24:11.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.618894 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:24:11.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.629890 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:24:11.648324 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:24:11.666238 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:24:11.676182 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:24:11.686400 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:24:11.704362 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:24:11.714250 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:24:11.717871 systemd-journald[1107]: Time spent on flushing to /var/log/journal/229d986b5689ad1d167f526309bfcfa4 is 91.382ms for 1028 entries. Jun 25 16:24:11.717871 systemd-journald[1107]: System Journal (/var/log/journal/229d986b5689ad1d167f526309bfcfa4) is 8.0M, max 584.8M, 576.8M free. Jun 25 16:24:11.825013 systemd-journald[1107]: Received client request to flush runtime journal. Jun 25 16:24:11.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.729377 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:24:11.739289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:24:11.742232 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
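systemd-journald above reports the runtime and system journal sizes and the time spent flushing from /run to /var/log/journal. The sketch below only tallies on-disk usage of a journal directory by summing file sizes; journald's own accounting also applies filesystem-based limits, which this illustration ignores.

# Sketch: tally how much space a journal directory uses, similar in spirit to the
# "Runtime Journal ... is 8.0M" size reports above.
import os

def dir_usage_bytes(path: str) -> int:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file rotated away mid-walk
    return total

if __name__ == "__main__":
    runtime = "/run/log/journal"
    print(f"{runtime}: {dir_usage_bytes(runtime) / 1024 / 1024:.1f} MiB")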
Jun 25 16:24:11.762368 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:24:11.777348 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:24:11.795584 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:24:11.805344 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:24:11.815788 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:24:11.825793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:24:11.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.836163 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:24:11.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.847113 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:24:11.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:11.861006 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:24:11.881666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:24:11.893257 udevadm[1129]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 16:24:11.926566 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:24:11.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.533874 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:24:12.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.549599 kernel: kauditd_printk_skb: 69 callbacks suppressed Jun 25 16:24:12.549765 kernel: audit: type=1130 audit(1719332652.543:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.573478 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:24:12.606438 systemd-udevd[1139]: Using default interface naming scheme 'v252'. Jun 25 16:24:12.644118 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:24:12.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:12.677058 kernel: audit: type=1130 audit(1719332652.653:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.680315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:24:12.699314 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:24:12.742091 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 25 16:24:12.806198 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:24:12.814143 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1161) Jun 25 16:24:12.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.847082 kernel: audit: type=1130 audit(1719332652.823:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:12.909090 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:24:12.927051 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jun 25 16:24:12.971061 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:24:13.000099 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:24:13.001647 systemd-networkd[1152]: lo: Link UP Jun 25 16:24:13.002116 systemd-networkd[1152]: lo: Gained carrier Jun 25 16:24:13.008432 systemd-networkd[1152]: Enumeration completed Jun 25 16:24:13.008781 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:24:13.008882 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:24:13.009845 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:24:13.010209 kernel: EDAC MC: Ver: 3.0.0 Jun 25 16:24:13.010802 systemd-networkd[1152]: eth0: Link UP Jun 25 16:24:13.010920 systemd-networkd[1152]: eth0: Gained carrier Jun 25 16:24:13.011012 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:24:13.019035 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jun 25 16:24:13.030160 kernel: ACPI: button: Sleep Button [SLPF] Jun 25 16:24:13.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.066710 kernel: audit: type=1130 audit(1719332653.031:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.066825 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1148) Jun 25 16:24:13.057395 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 25 16:24:13.067393 systemd-networkd[1152]: eth0: DHCPv4 address 10.128.0.51/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jun 25 16:24:13.120050 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:24:13.170301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jun 25 16:24:13.180587 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:24:13.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.213074 kernel: audit: type=1130 audit(1719332653.189:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.217457 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:24:13.235461 lvm[1179]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:24:13.268657 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:24:13.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.278559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:24:13.301058 kernel: audit: type=1130 audit(1719332653.277:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.318361 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:24:13.324628 lvm[1181]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:24:13.352503 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:24:13.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.362524 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:24:13.385100 kernel: audit: type=1130 audit(1719332653.361:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.394197 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:24:13.394249 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:24:13.404260 systemd[1]: Reached target machines.target - Containers. Jun 25 16:24:13.423341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:24:13.433338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
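For reference, the eth0 setup above is driven by a catch-all systemd-networkd unit that enables DHCP. A minimal sketch of such a unit follows; it is illustrative only and is not the literal contents of Flatcar's /usr/lib/systemd/network/zz-default.network (the drop-in file name below is hypothetical).

# Illustrative catch-all DHCP unit; path and match pattern are examples, not taken from this host.
cat <<'EOF' > /etc/systemd/network/99-dhcp-example.network
[Match]
Name=eth*

[Network]
DHCP=yes
EOF
networkctl reload    # have systemd-networkd re-read its configuration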
Jun 25 16:24:13.433433 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:13.440291 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:24:13.452054 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:24:13.471377 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:24:13.490562 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:24:13.501800 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:24:13.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.512766 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1184 (bootctl) Jun 25 16:24:13.540260 kernel: audit: type=1130 audit(1719332653.511:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.540454 kernel: loop0: detected capacity change from 0 to 209816 Jun 25 16:24:13.546641 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:24:13.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.692308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:24:13.693962 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:24:13.719233 kernel: audit: type=1130 audit(1719332653.694:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.736653 systemd-fsck[1193]: fsck.fat 4.2 (2021-01-31) Jun 25 16:24:13.736653 systemd-fsck[1193]: /dev/sda1: 808 files, 120378/258078 clusters Jun 25 16:24:13.738963 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:24:13.753949 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:24:13.781257 kernel: audit: type=1130 audit(1719332653.753:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.784278 systemd[1]: Mounting boot.mount - Boot partition... 
Jun 25 16:24:13.813192 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:24:13.819054 kernel: loop1: detected capacity change from 0 to 139360 Jun 25 16:24:13.860560 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:24:13.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:13.935070 kernel: loop2: detected capacity change from 0 to 83864 Jun 25 16:24:14.002139 kernel: loop3: detected capacity change from 0 to 80584 Jun 25 16:24:14.090805 kernel: loop4: detected capacity change from 0 to 209816 Jun 25 16:24:14.132063 kernel: loop5: detected capacity change from 0 to 139360 Jun 25 16:24:14.173941 kernel: loop6: detected capacity change from 0 to 83864 Jun 25 16:24:14.212689 kernel: loop7: detected capacity change from 0 to 80584 Jun 25 16:24:14.235559 (sd-sysext)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jun 25 16:24:14.236736 (sd-sysext)[1205]: Merged extensions into '/usr'. Jun 25 16:24:14.241414 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:24:14.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:14.255773 ldconfig[1183]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:24:14.258502 systemd[1]: Starting ensure-sysext.service... Jun 25 16:24:14.268194 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:24:14.292444 systemd-tmpfiles[1209]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:24:14.294998 systemd-tmpfiles[1209]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:24:14.295696 systemd-tmpfiles[1209]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:24:14.297535 systemd-tmpfiles[1209]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:24:14.300909 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:24:14.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:14.314802 systemd[1]: Reloading. Jun 25 16:24:14.557649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:24:14.646271 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:24:14.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:14.672203 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
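The extension merge above ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce') is performed by systemd-sysext. As a hedged sketch, the merge can be inspected or repeated by hand with the standard systemd-sysext commands; the directories listed are the usual search locations for extension images, not paths verified on this host.

systemd-sysext status                                  # show which extensions are merged into /usr and /opt
ls /etc/extensions /var/lib/extensions 2>/dev/null     # typical locations for *.raw sysext images
systemd-sysext refresh                                 # unmerge and re-merge after adding or removing an image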
Jun 25 16:24:14.689383 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:24:14.706363 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:24:14.725197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:24:14.743000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:24:14.743000 audit[1294]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce4eff070 a2=420 a3=0 items=0 ppid=1276 pid=1294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:14.743000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:24:14.744971 augenrules[1294]: No rules Jun 25 16:24:14.745325 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:24:14.763434 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:24:14.774829 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:24:14.785824 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:24:14.808704 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:24:14.824516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:14.825055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:24:14.833555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:24:14.848712 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:24:14.863716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:24:14.873362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:24:14.873689 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:14.881716 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:24:14.892218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:14.894922 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:24:14.906187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:24:14.906545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:24:14.917095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:24:14.917421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:24:14.928116 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:24:14.928426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
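augenrules reports "No rules" above, i.e. auditctl loaded an empty rule set from /etc/audit/audit.rules. A minimal sketch of adding a watch rule, assuming the stock auditd/augenrules layout (the file name and key below are examples, not present in the log):

cat <<'EOF' > /etc/audit/rules.d/99-example.rules
# watch sshd_config for writes and attribute changes; the key name is arbitrary
-w /etc/ssh/sshd_config -p wa -k sshd_config_changes
EOF
augenrules --load    # regenerate /etc/audit/audit.rules and load it into the kernel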
Jun 25 16:24:14.939165 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:24:14.950099 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:24:14.950360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:24:14.950523 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:24:14.956489 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:14.957097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:24:14.962211 systemd-networkd[1152]: eth0: Gained IPv6LL Jun 25 16:24:14.963678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:24:14.978709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:24:14.993721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:24:15.010748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:24:15.020568 systemd-timesyncd[1295]: Contacted time server 169.254.169.254:123 (169.254.169.254). Jun 25 16:24:15.021309 systemd-timesyncd[1295]: Initial clock synchronization to Tue 2024-06-25 16:24:14.736558 UTC. Jun 25 16:24:15.023593 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 16:24:15.032327 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:24:15.032648 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:15.032920 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:24:15.033154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:24:15.037216 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:24:15.049407 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:24:15.050676 systemd-resolved[1291]: Positive Trust Anchors: Jun 25 16:24:15.050704 systemd-resolved[1291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:24:15.050756 systemd-resolved[1291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:24:15.057477 systemd-resolved[1291]: Defaulting to hostname 'linux'. 
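systemd-timesyncd above synchronizes against the GCE metadata server's NTP endpoint (169.254.169.254). Pinning that server explicitly would look roughly like the following drop-in; the drop-in file name is an example and does not appear in the log.

mkdir -p /etc/systemd/timesyncd.conf.d
cat <<'EOF' > /etc/systemd/timesyncd.conf.d/10-gce.conf
[Time]
NTP=169.254.169.254
EOF
systemctl restart systemd-timesyncd.service    # apply the new time source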
Jun 25 16:24:15.060368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:24:15.060698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:24:15.070729 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:24:15.081963 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:24:15.082286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:24:15.093059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:24:15.093391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:24:15.104000 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:24:15.104448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:24:15.124063 systemd[1]: Reached target network.target - Network. Jun 25 16:24:15.132347 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:24:15.142298 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:24:15.152258 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:24:15.162328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:24:15.162417 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:24:15.172480 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:24:15.182365 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:24:15.192598 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:24:15.202574 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:24:15.212351 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:24:15.222323 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:24:15.222404 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:24:15.231251 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:24:15.241342 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:24:15.253680 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:24:15.262606 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:15.262830 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:24:15.264970 systemd[1]: Finished ensure-sysext.service. Jun 25 16:24:15.274884 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 16:24:15.283512 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:24:15.305586 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jun 25 16:24:15.339356 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jun 25 16:24:15.351826 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 16:24:15.361207 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 25 16:24:15.370247 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:24:15.379561 systemd[1]: System is tainted: cgroupsv1 Jun 25 16:24:15.379680 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:24:15.379721 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:24:15.387300 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:24:15.408433 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:24:15.429427 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:24:15.450707 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:24:15.464693 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:24:15.467312 jq[1347]: false Jun 25 16:24:15.474239 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:24:15.481717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:15.499391 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:24:15.515364 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:24:15.532366 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jun 25 16:24:15.545048 extend-filesystems[1350]: Found loop4 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found loop5 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found loop6 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found loop7 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda1 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda2 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda3 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found usr Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda4 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda6 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda7 Jun 25 16:24:15.545048 extend-filesystems[1350]: Found sda9 Jun 25 16:24:15.545048 extend-filesystems[1350]: Checking size of /dev/sda9 Jun 25 16:24:15.926340 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jun 25 16:24:15.926433 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.767 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.797 INFO Fetch successful Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.797 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.809 INFO Fetch successful Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.823 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.827 INFO Fetch successful Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.827 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jun 25 16:24:15.926510 coreos-metadata[1345]: Jun 25 16:24:15.829 INFO Fetch successful 
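The coreos-metadata fetches above query the GCE metadata server directly. Equivalent manual queries use the same URLs that appear in the log; the Metadata-Flavor header is required by the metadata service.

curl -s -H 'Metadata-Flavor: Google' \
  http://169.254.169.254/computeMetadata/v1/instance/hostname
curl -s -H 'Metadata-Flavor: Google' \
  http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip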
Jun 25 16:24:15.927093 extend-filesystems[1350]: Resized partition /dev/sda9 Jun 25 16:24:15.615132 dbus-daemon[1346]: [system] SELinux support is enabled Jun 25 16:24:15.546261 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:24:15.939905 init.sh[1364]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jun 25 16:24:15.939905 init.sh[1364]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jun 25 16:24:15.939905 init.sh[1364]: + /usr/bin/google_instance_setup Jun 25 16:24:15.940550 extend-filesystems[1379]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:24:15.618584 dbus-daemon[1346]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1152 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 16:24:15.565435 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:24:15.963527 extend-filesystems[1379]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 25 16:24:15.963527 extend-filesystems[1379]: old_desc_blocks = 1, new_desc_blocks = 2 Jun 25 16:24:15.963527 extend-filesystems[1379]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jun 25 16:24:15.781636 dbus-daemon[1346]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 16:24:15.592369 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:24:16.021935 extend-filesystems[1350]: Resized filesystem in /dev/sda9 Jun 25 16:24:15.618386 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 16:24:15.639368 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:24:15.639577 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jun 25 16:24:16.031170 update_engine[1385]: I0625 16:24:15.828166 1385 main.cc:92] Flatcar Update Engine starting Jun 25 16:24:16.031170 update_engine[1385]: I0625 16:24:15.865382 1385 update_check_scheduler.cc:74] Next update check in 5m31s Jun 25 16:24:15.645433 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:24:16.032001 jq[1386]: true Jun 25 16:24:15.673293 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:24:15.697904 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:24:15.723744 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:24:16.032853 tar[1395]: linux-amd64/helm Jun 25 16:24:15.724460 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:24:16.033581 jq[1397]: true Jun 25 16:24:15.727298 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:24:15.729505 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:24:15.741624 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:24:15.757960 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:24:15.758734 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
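The extend-filesystems output above is an online ext4 grow of the root filesystem on /dev/sda9 (1617920 to 2538491 4k blocks). Assuming the underlying partition has already been enlarged, the equivalent manual step is a plain resize2fs on the mounted device:

lsblk /dev/sda          # confirm sda9 spans the enlarged partition
resize2fs /dev/sda9     # ext4 supports growing while mounted read-write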
Jun 25 16:24:15.779327 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:24:15.779419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:24:15.792186 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:24:15.792253 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:24:15.815338 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 16:24:15.846541 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:24:15.871991 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:24:15.878336 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:24:15.948717 systemd-logind[1382]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:24:15.948748 systemd-logind[1382]: Watching system buttons on /dev/input/event3 (Sleep Button) Jun 25 16:24:15.948780 systemd-logind[1382]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:24:15.957445 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:24:15.957870 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:24:15.961538 systemd-logind[1382]: New seat seat0. Jun 25 16:24:15.978438 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:24:16.023512 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:24:16.041230 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:24:16.117499 bash[1424]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:24:16.120300 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:24:16.139669 systemd[1]: Starting sshkeys.service... Jun 25 16:24:16.210287 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 16:24:16.228872 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 25 16:24:16.374899 coreos-metadata[1447]: Jun 25 16:24:16.374 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jun 25 16:24:16.378085 coreos-metadata[1447]: Jun 25 16:24:16.377 INFO Fetch failed with 404: resource not found Jun 25 16:24:16.378337 coreos-metadata[1447]: Jun 25 16:24:16.378 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jun 25 16:24:16.379844 coreos-metadata[1447]: Jun 25 16:24:16.379 INFO Fetch successful Jun 25 16:24:16.379967 coreos-metadata[1447]: Jun 25 16:24:16.379 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jun 25 16:24:16.391160 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1431) Jun 25 16:24:16.391301 coreos-metadata[1447]: Jun 25 16:24:16.380 INFO Fetch failed with 404: resource not found Jun 25 16:24:16.391301 coreos-metadata[1447]: Jun 25 16:24:16.380 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jun 25 16:24:16.391301 coreos-metadata[1447]: Jun 25 16:24:16.390 INFO Fetch failed with 404: resource not found Jun 25 16:24:16.391301 coreos-metadata[1447]: Jun 25 16:24:16.390 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jun 25 16:24:16.392853 coreos-metadata[1447]: Jun 25 16:24:16.392 INFO Fetch successful Jun 25 16:24:16.402615 unknown[1447]: wrote ssh authorized keys file for user: core Jun 25 16:24:16.496081 update-ssh-keys[1453]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:24:16.496933 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 16:24:16.513171 systemd[1]: Finished sshkeys.service. Jun 25 16:24:16.697085 locksmithd[1403]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:24:16.817455 dbus-daemon[1346]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 16:24:16.817731 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 16:24:16.818780 dbus-daemon[1346]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1399 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 16:24:16.836599 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 16:24:16.884527 polkitd[1465]: Started polkitd version 121 Jun 25 16:24:16.898254 polkitd[1465]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 16:24:16.899244 polkitd[1465]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 16:24:16.904058 polkitd[1465]: Finished loading, compiling and executing 2 rules Jun 25 16:24:16.904934 dbus-daemon[1346]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 16:24:16.905225 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 16:24:16.906262 polkitd[1465]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 16:24:16.966941 systemd-hostnamed[1399]: Hostname set to (transient) Jun 25 16:24:16.967866 systemd-resolved[1291]: System hostname changed to 'ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal'. 
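locksmithd above starts with strategy="reboot"; on Flatcar this strategy is normally set through /etc/flatcar/update.conf. A hedged example of changing it (the value shown is illustrative, not read from this machine):

echo 'REBOOT_STRATEGY=reboot' >> /etc/flatcar/update.conf
systemctl restart locksmithd.service    # pick up the new reboot strategy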
Jun 25 16:24:17.283867 containerd[1398]: time="2024-06-25T16:24:17.282522559Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:24:17.440155 containerd[1398]: time="2024-06-25T16:24:17.440078470Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:24:17.440155 containerd[1398]: time="2024-06-25T16:24:17.440163595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.447536872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.447611606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.448163537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.448204800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.448338792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.448414776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.448436035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:17.448827 containerd[1398]: time="2024-06-25T16:24:17.448526330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:17.449371 containerd[1398]: time="2024-06-25T16:24:17.448849426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:17.449371 containerd[1398]: time="2024-06-25T16:24:17.448878533Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:24:17.449371 containerd[1398]: time="2024-06-25T16:24:17.448896354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:24:17.449371 containerd[1398]: time="2024-06-25T16:24:17.449223720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:24:17.449371 containerd[1398]: time="2024-06-25T16:24:17.449253793Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jun 25 16:24:17.449371 containerd[1398]: time="2024-06-25T16:24:17.449346479Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:24:17.449371 containerd[1398]: time="2024-06-25T16:24:17.449366111Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.459391966Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.459518181Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.459545702Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.459633525Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.459715302Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.459755330Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.459781868Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.460074288Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.460123378Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.460148554Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.460173735Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.460219474Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.460255100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:24:17.460319 containerd[1398]: time="2024-06-25T16:24:17.460296750Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:24:17.461109 containerd[1398]: time="2024-06-25T16:24:17.460318486Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:24:17.461109 containerd[1398]: time="2024-06-25T16:24:17.460357341Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:24:17.461109 containerd[1398]: time="2024-06-25T16:24:17.460381598Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jun 25 16:24:17.461109 containerd[1398]: time="2024-06-25T16:24:17.460400568Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:24:17.461109 containerd[1398]: time="2024-06-25T16:24:17.460435537Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:24:17.461109 containerd[1398]: time="2024-06-25T16:24:17.460687061Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:24:17.461738 containerd[1398]: time="2024-06-25T16:24:17.461676678Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:24:17.461856 containerd[1398]: time="2024-06-25T16:24:17.461768837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.461856 containerd[1398]: time="2024-06-25T16:24:17.461817524Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:24:17.461968 containerd[1398]: time="2024-06-25T16:24:17.461876341Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:24:17.462040 containerd[1398]: time="2024-06-25T16:24:17.461992466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462093 containerd[1398]: time="2024-06-25T16:24:17.462046616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462093 containerd[1398]: time="2024-06-25T16:24:17.462073821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462187 containerd[1398]: time="2024-06-25T16:24:17.462113112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462187 containerd[1398]: time="2024-06-25T16:24:17.462138981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462187 containerd[1398]: time="2024-06-25T16:24:17.462162531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462331 containerd[1398]: time="2024-06-25T16:24:17.462205257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462331 containerd[1398]: time="2024-06-25T16:24:17.462229541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462331 containerd[1398]: time="2024-06-25T16:24:17.462304786Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:24:17.462652 containerd[1398]: time="2024-06-25T16:24:17.462602997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462744 containerd[1398]: time="2024-06-25T16:24:17.462643963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462744 containerd[1398]: time="2024-06-25T16:24:17.462685873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jun 25 16:24:17.462856 containerd[1398]: time="2024-06-25T16:24:17.462723788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462856 containerd[1398]: time="2024-06-25T16:24:17.462770963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462856 containerd[1398]: time="2024-06-25T16:24:17.462798638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.462856 containerd[1398]: time="2024-06-25T16:24:17.462839870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.463047 containerd[1398]: time="2024-06-25T16:24:17.462862291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 16:24:17.465509 containerd[1398]: time="2024-06-25T16:24:17.465351739Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:24:17.465862 containerd[1398]: time="2024-06-25T16:24:17.465541216Z" level=info msg="Connect containerd service" 
Jun 25 16:24:17.465862 containerd[1398]: time="2024-06-25T16:24:17.465655075Z" level=info msg="using legacy CRI server" Jun 25 16:24:17.465862 containerd[1398]: time="2024-06-25T16:24:17.465691415Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:24:17.467370 containerd[1398]: time="2024-06-25T16:24:17.466171773Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:24:17.478543 containerd[1398]: time="2024-06-25T16:24:17.476928097Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:24:17.478543 containerd[1398]: time="2024-06-25T16:24:17.477088439Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:24:17.478543 containerd[1398]: time="2024-06-25T16:24:17.477143248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:24:17.478543 containerd[1398]: time="2024-06-25T16:24:17.477161499Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:24:17.478543 containerd[1398]: time="2024-06-25T16:24:17.477178489Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:24:17.479703 containerd[1398]: time="2024-06-25T16:24:17.479601278Z" level=info msg="Start subscribing containerd event" Jun 25 16:24:17.479936 containerd[1398]: time="2024-06-25T16:24:17.479889820Z" level=info msg="Start recovering state" Jun 25 16:24:17.480739 containerd[1398]: time="2024-06-25T16:24:17.480675219Z" level=info msg="Start event monitor" Jun 25 16:24:17.480904 containerd[1398]: time="2024-06-25T16:24:17.480883250Z" level=info msg="Start snapshots syncer" Jun 25 16:24:17.481038 containerd[1398]: time="2024-06-25T16:24:17.481000923Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:24:17.481145 containerd[1398]: time="2024-06-25T16:24:17.481125413Z" level=info msg="Start streaming server" Jun 25 16:24:17.481515 containerd[1398]: time="2024-06-25T16:24:17.480508848Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:24:17.481931 containerd[1398]: time="2024-06-25T16:24:17.481838469Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:24:17.482372 containerd[1398]: time="2024-06-25T16:24:17.482340511Z" level=info msg="containerd successfully booted in 0.215796s" Jun 25 16:24:17.482550 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:24:18.003549 tar[1395]: linux-amd64/LICENSE Jun 25 16:24:18.003549 tar[1395]: linux-amd64/README.md Jun 25 16:24:18.025516 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:24:18.128881 instance-setup[1372]: INFO Running google_set_multiqueue. Jun 25 16:24:18.173530 instance-setup[1372]: INFO Set channels for eth0 to 2. Jun 25 16:24:18.179622 instance-setup[1372]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
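The CRI config dump above reports, among other things, Snapshotter:overlayfs, SandboxImage:registry.k8s.io/pause:3.8 and SystemdCgroup:false for the runc runtime. A sketch of the corresponding /etc/containerd/config.toml fragment follows; it is written to a hypothetical .example path rather than the live file, and it is not the configuration actually installed on this host.

cat <<'EOF' > /etc/containerd/config.toml.example
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
EOF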
Jun 25 16:24:18.183187 instance-setup[1372]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jun 25 16:24:18.183498 instance-setup[1372]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jun 25 16:24:18.186296 instance-setup[1372]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jun 25 16:24:18.186565 instance-setup[1372]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jun 25 16:24:18.189118 instance-setup[1372]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jun 25 16:24:18.189317 instance-setup[1372]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jun 25 16:24:18.192689 instance-setup[1372]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jun 25 16:24:18.209254 instance-setup[1372]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 25 16:24:18.214428 instance-setup[1372]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jun 25 16:24:18.217174 instance-setup[1372]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jun 25 16:24:18.217435 instance-setup[1372]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jun 25 16:24:18.256104 init.sh[1364]: + /usr/bin/google_metadata_script_runner --script-type startup Jun 25 16:24:18.581378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:18.687127 startup-script[1511]: INFO Starting startup scripts. Jun 25 16:24:18.695269 startup-script[1511]: INFO No startup scripts found in metadata. Jun 25 16:24:18.695638 startup-script[1511]: INFO Finished running startup scripts. Jun 25 16:24:18.743864 init.sh[1364]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jun 25 16:24:18.743864 init.sh[1364]: + daemon_pids=() Jun 25 16:24:18.743864 init.sh[1364]: + for d in accounts clock_skew network Jun 25 16:24:18.745047 init.sh[1364]: + daemon_pids+=($!) Jun 25 16:24:18.745047 init.sh[1364]: + for d in accounts clock_skew network Jun 25 16:24:18.745047 init.sh[1364]: + daemon_pids+=($!) Jun 25 16:24:18.745047 init.sh[1364]: + for d in accounts clock_skew network Jun 25 16:24:18.745047 init.sh[1364]: + daemon_pids+=($!) Jun 25 16:24:18.746575 init.sh[1364]: + NOTIFY_SOCKET=/run/systemd/notify Jun 25 16:24:18.746575 init.sh[1364]: + /usr/bin/systemd-notify --ready Jun 25 16:24:18.747046 init.sh[1520]: + /usr/bin/google_accounts_daemon Jun 25 16:24:18.748560 init.sh[1522]: + /usr/bin/google_network_daemon Jun 25 16:24:18.754783 init.sh[1521]: + /usr/bin/google_clock_skew_daemon Jun 25 16:24:18.812174 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jun 25 16:24:18.826856 init.sh[1364]: + wait -n 1520 1521 1522 Jun 25 16:24:19.522624 sshd_keygen[1390]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:24:19.624759 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:24:19.644710 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:24:19.659310 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:24:19.659719 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:24:19.677668 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:24:19.706631 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:24:19.715558 google-networking[1522]: INFO Starting Google Networking daemon. 
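google_set_multiqueue above pins the virtio-net IRQs and programs transmit packet steering through procfs/sysfs. The equivalent manual writes, using the same IRQ number and XPS mask that appear in the log, are:

echo 0 > /proc/irq/31/smp_affinity_list             # steer IRQ 31 to CPU 0
echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # XPS bitmask 0x1: tx-0 handled by CPU 0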
Jun 25 16:24:19.725828 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:24:19.744824 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:24:19.756431 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:24:19.765412 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:24:19.770409 google-clock-skew[1521]: INFO Starting Google Clock Skew daemon. Jun 25 16:24:19.784764 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:24:19.786574 google-clock-skew[1521]: INFO Clock drift token has changed: 0. Jun 25 16:24:19.804160 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:24:19.804573 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:24:19.817349 systemd[1]: Startup finished in 16.668s (kernel) + 10.565s (userspace) = 27.234s. Jun 25 16:24:19.867431 kubelet[1516]: E0625 16:24:19.867310 1516 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:24:19.870372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:24:19.870685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:24:19.900560 groupadd[1555]: group added to /etc/group: name=google-sudoers, GID=1000 Jun 25 16:24:19.904395 groupadd[1555]: group added to /etc/gshadow: name=google-sudoers Jun 25 16:24:19.949828 groupadd[1555]: new group: name=google-sudoers, GID=1000 Jun 25 16:24:19.978156 google-accounts[1520]: INFO Starting Google Accounts daemon. Jun 25 16:24:19.990069 google-accounts[1520]: WARNING OS Login not installed. Jun 25 16:24:19.991989 google-accounts[1520]: INFO Creating a new user account for 0. Jun 25 16:24:19.997082 init.sh[1564]: useradd: invalid user name '0': use --badname to ignore Jun 25 16:24:19.997414 google-accounts[1520]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jun 25 16:24:20.001022 systemd-resolved[1291]: Clock change detected. Flushing caches. Jun 25 16:24:20.001544 google-clock-skew[1521]: INFO Synced system time with hardware clock. Jun 25 16:24:23.395675 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:24:23.407027 systemd[1]: Started sshd@0-10.128.0.51:22-139.178.89.65:59384.service - OpenSSH per-connection server daemon (139.178.89.65:59384). Jun 25 16:24:23.699805 sshd[1566]: Accepted publickey for core from 139.178.89.65 port 59384 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:24:23.703319 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:23.714365 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:24:23.721817 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:24:23.725587 systemd-logind[1382]: New session 1 of user core. Jun 25 16:24:23.744419 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:24:23.749244 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 25 16:24:23.771851 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:23.879015 systemd[1571]: Queued start job for default target default.target. Jun 25 16:24:23.879407 systemd[1571]: Reached target paths.target - Paths. Jun 25 16:24:23.879436 systemd[1571]: Reached target sockets.target - Sockets. Jun 25 16:24:23.879459 systemd[1571]: Reached target timers.target - Timers. Jun 25 16:24:23.879479 systemd[1571]: Reached target basic.target - Basic System. Jun 25 16:24:23.879545 systemd[1571]: Reached target default.target - Main User Target. Jun 25 16:24:23.879604 systemd[1571]: Startup finished in 99ms. Jun 25 16:24:23.880204 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:24:23.894753 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:24:24.123710 systemd[1]: Started sshd@1-10.128.0.51:22-139.178.89.65:59398.service - OpenSSH per-connection server daemon (139.178.89.65:59398). Jun 25 16:24:24.407623 sshd[1580]: Accepted publickey for core from 139.178.89.65 port 59398 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:24:24.409479 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:24.416204 systemd-logind[1382]: New session 2 of user core. Jun 25 16:24:24.422678 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:24:24.624973 sshd[1580]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:24.630002 systemd[1]: sshd@1-10.128.0.51:22-139.178.89.65:59398.service: Deactivated successfully. Jun 25 16:24:24.631568 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:24:24.634123 systemd-logind[1382]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:24:24.636412 systemd-logind[1382]: Removed session 2. Jun 25 16:24:24.676093 systemd[1]: Started sshd@2-10.128.0.51:22-139.178.89.65:59414.service - OpenSSH per-connection server daemon (139.178.89.65:59414). Jun 25 16:24:24.961754 sshd[1587]: Accepted publickey for core from 139.178.89.65 port 59414 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:24:24.964239 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:24.971428 systemd-logind[1382]: New session 3 of user core. Jun 25 16:24:24.976875 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:24:25.173929 sshd[1587]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:25.179009 systemd[1]: sshd@2-10.128.0.51:22-139.178.89.65:59414.service: Deactivated successfully. Jun 25 16:24:25.180533 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:24:25.182573 systemd-logind[1382]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:24:25.184764 systemd-logind[1382]: Removed session 3. Jun 25 16:24:25.225049 systemd[1]: Started sshd@3-10.128.0.51:22-139.178.89.65:59420.service - OpenSSH per-connection server daemon (139.178.89.65:59420). Jun 25 16:24:25.513073 sshd[1594]: Accepted publickey for core from 139.178.89.65 port 59420 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:24:25.515148 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:25.522574 systemd-logind[1382]: New session 4 of user core. Jun 25 16:24:25.529899 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jun 25 16:24:25.731672 sshd[1594]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:25.736922 systemd[1]: sshd@3-10.128.0.51:22-139.178.89.65:59420.service: Deactivated successfully. Jun 25 16:24:25.738852 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:24:25.739465 systemd-logind[1382]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:24:25.741427 systemd-logind[1382]: Removed session 4. Jun 25 16:24:25.781068 systemd[1]: Started sshd@4-10.128.0.51:22-139.178.89.65:59430.service - OpenSSH per-connection server daemon (139.178.89.65:59430). Jun 25 16:24:26.065778 sshd[1601]: Accepted publickey for core from 139.178.89.65 port 59430 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:24:26.067896 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:26.075756 systemd-logind[1382]: New session 5 of user core. Jun 25 16:24:26.081888 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:24:26.262679 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:24:26.263194 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:24:26.281552 sudo[1605]: pam_unix(sudo:session): session closed for user root Jun 25 16:24:26.325105 sshd[1601]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:26.331306 systemd[1]: sshd@4-10.128.0.51:22-139.178.89.65:59430.service: Deactivated successfully. Jun 25 16:24:26.333564 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:24:26.334382 systemd-logind[1382]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:24:26.335990 systemd-logind[1382]: Removed session 5. Jun 25 16:24:26.375447 systemd[1]: Started sshd@5-10.128.0.51:22-139.178.89.65:33236.service - OpenSSH per-connection server daemon (139.178.89.65:33236). Jun 25 16:24:26.657448 sshd[1609]: Accepted publickey for core from 139.178.89.65 port 33236 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:24:26.659282 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:26.666059 systemd-logind[1382]: New session 6 of user core. Jun 25 16:24:26.672877 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:24:26.836123 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:24:26.836644 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:24:26.842562 sudo[1614]: pam_unix(sudo:session): session closed for user root Jun 25 16:24:26.857069 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:24:26.857559 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:24:26.880009 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:24:26.881000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:24:26.888139 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:24:26.888273 kernel: audit: type=1305 audit(1719332666.881:131): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:24:26.888337 auditctl[1617]: No rules Jun 25 16:24:26.889490 systemd[1]: audit-rules.service: Deactivated successfully. 
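The sudo commands above remove the shipped audit rule files and restart audit-rules.service; the records just below show the service flushing the loaded rules (auditctl -D reports "No rules") and rebuilding from /etc/audit/rules.d/ (augenrules likewise finds nothing). A rough equivalent of that reload done by hand, assuming the stock auditctl/augenrules binaries; the exact flags the service passes may differ:

import subprocess

# Flush whatever audit rules are currently loaded in the kernel.
subprocess.run(["auditctl", "-D"])
# Rebuild /etc/audit/audit.rules from /etc/audit/rules.d/*.rules and load it;
# with the directory emptied above, this too reports "No rules".
subprocess.run(["augenrules", "--load"])
# Show what, if anything, ended up loaded.
print(subprocess.run(["auditctl", "-l"], capture_output=True, text=True).stdout)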
Jun 25 16:24:26.889928 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:24:26.893770 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:24:26.881000 audit[1617]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc21452cb0 a2=420 a3=0 items=0 ppid=1 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:26.935296 kernel: audit: type=1300 audit(1719332666.881:131): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc21452cb0 a2=420 a3=0 items=0 ppid=1 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:26.881000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:24:26.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:26.952772 augenrules[1635]: No rules Jun 25 16:24:26.954658 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:24:26.967642 kernel: audit: type=1327 audit(1719332666.881:131): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:24:26.967803 kernel: audit: type=1131 audit(1719332666.888:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:26.967898 kernel: audit: type=1130 audit(1719332666.953:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:26.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:26.964477 sudo[1613]: pam_unix(sudo:session): session closed for user root Jun 25 16:24:26.963000 audit[1613]: USER_END pid=1613 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.010868 sshd[1609]: pam_unix(sshd:session): session closed for user core Jun 25 16:24:27.015077 kernel: audit: type=1106 audit(1719332666.963:134): pid=1613 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.015207 kernel: audit: type=1104 audit(1719332666.963:135): pid=1613 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:24:26.963000 audit[1613]: CRED_DISP pid=1613 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:27.020607 systemd-logind[1382]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:24:27.023218 systemd[1]: sshd@5-10.128.0.51:22-139.178.89.65:33236.service: Deactivated successfully. Jun 25 16:24:27.024627 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:24:27.026654 systemd-logind[1382]: Removed session 6. Jun 25 16:24:27.014000 audit[1609]: USER_END pid=1609 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.038293 kernel: audit: type=1106 audit(1719332667.014:136): pid=1609 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.014000 audit[1609]: CRED_DISP pid=1609 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.070312 kernel: audit: type=1104 audit(1719332667.014:137): pid=1609 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.51:22-139.178.89.65:33236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.118079 kernel: audit: type=1131 audit(1719332667.022:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.51:22-139.178.89.65:33236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.121156 systemd[1]: Started sshd@6-10.128.0.51:22-139.178.89.65:33248.service - OpenSSH per-connection server daemon (139.178.89.65:33248). Jun 25 16:24:27.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.51:22-139.178.89.65:33248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:27.406000 audit[1642]: USER_ACCT pid=1642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.409485 sshd[1642]: Accepted publickey for core from 139.178.89.65 port 33248 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:24:27.408000 audit[1642]: CRED_ACQ pid=1642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.409000 audit[1642]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf64f9780 a2=3 a3=7f298ff8a480 items=0 ppid=1 pid=1642 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:27.409000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:24:27.410972 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:24:27.418740 systemd-logind[1382]: New session 7 of user core. Jun 25 16:24:27.424803 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:24:27.433000 audit[1642]: USER_START pid=1642 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.435000 audit[1647]: CRED_ACQ pid=1647 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:24:27.587000 audit[1648]: USER_ACCT pid=1648 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.588674 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:24:27.589157 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:24:27.587000 audit[1648]: CRED_REFR pid=1648 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.590000 audit[1648]: USER_START pid=1648 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:24:27.750150 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:24:28.148071 dockerd[1658]: time="2024-06-25T16:24:28.147870393Z" level=info msg="Starting up" Jun 25 16:24:28.176747 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport357179091-merged.mount: Deactivated successfully. Jun 25 16:24:28.675340 dockerd[1658]: time="2024-06-25T16:24:28.675277462Z" level=info msg="Loading containers: start." 
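In the audit records that follow, every iptables invocation made by the Docker daemon is captured in the PROCTITLE field as a hex string with NUL bytes separating the arguments. A small helper to turn those strings back into readable command lines; the sample is copied from the first NETFILTER_CFG record below:

# Decode an audit PROCTITLE hex string (argv entries are NUL-separated).
def decode_proctitle(hex_str: str) -> str:
    return " ".join(bytes.fromhex(hex_str).decode("utf-8", "replace").split("\x00"))

sample = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
          "002D74006E6174002D4E00444F434B4552")
print(decode_proctitle(sample))
# -> /usr/sbin/iptables --wait -t nat -N DOCKER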
Jun 25 16:24:28.764000 audit[1690]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.764000 audit[1690]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff7cd76d90 a2=0 a3=7fcf17443e90 items=0 ppid=1658 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.764000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:24:28.767000 audit[1692]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.767000 audit[1692]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffceaae6f70 a2=0 a3=7fb3fb86ae90 items=0 ppid=1658 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.767000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:24:28.770000 audit[1694]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.770000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdd8386610 a2=0 a3=7fbde5498e90 items=0 ppid=1658 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.770000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:24:28.773000 audit[1696]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.773000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffecce3c080 a2=0 a3=7f654e3ade90 items=0 ppid=1658 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.773000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:24:28.778000 audit[1698]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.778000 audit[1698]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffed3327510 a2=0 a3=7f3df9840e90 items=0 ppid=1658 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.778000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:24:28.781000 audit[1700]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1700 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:24:28.781000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe02dacf00 a2=0 a3=7f40a002be90 items=0 ppid=1658 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.781000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:24:28.796000 audit[1702]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.796000 audit[1702]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2ee0c340 a2=0 a3=7fb82a8d4e90 items=0 ppid=1658 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.796000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:24:28.799000 audit[1704]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.799000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffbfd77dc0 a2=0 a3=7f7a7d4fae90 items=0 ppid=1658 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.799000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:24:28.802000 audit[1706]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1706 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.802000 audit[1706]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fffe31ece40 a2=0 a3=7f7385a68e90 items=0 ppid=1658 pid=1706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.802000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:24:28.815000 audit[1710]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1710 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.815000 audit[1710]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc715db700 a2=0 a3=7fb922901e90 items=0 ppid=1658 pid=1710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.815000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:24:28.817000 audit[1711]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.817000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcf776f0a0 a2=0 a3=7fe19a526e90 items=0 ppid=1658 
pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.817000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:24:28.834288 kernel: Initializing XFRM netlink socket Jun 25 16:24:28.896000 audit[1719]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.896000 audit[1719]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff21c69d40 a2=0 a3=7f5407f5de90 items=0 ppid=1658 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.896000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:24:28.909000 audit[1722]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.909000 audit[1722]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff1e5540d0 a2=0 a3=7fbf935cfe90 items=0 ppid=1658 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.909000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:24:28.915000 audit[1726]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.915000 audit[1726]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd50668aa0 a2=0 a3=7f3a0c27be90 items=0 ppid=1658 pid=1726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.915000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:24:28.919000 audit[1728]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.919000 audit[1728]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcdec2def0 a2=0 a3=7f0bd73f4e90 items=0 ppid=1658 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.919000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:24:28.922000 audit[1730]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1730 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.922000 audit[1730]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc861a4eb0 
a2=0 a3=7fe4dc8c3e90 items=0 ppid=1658 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.922000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:24:28.926000 audit[1732]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.926000 audit[1732]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdec1e2a50 a2=0 a3=7f7c8ef8be90 items=0 ppid=1658 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.926000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:24:28.930000 audit[1734]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.930000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe1b032350 a2=0 a3=7f743ab5ae90 items=0 ppid=1658 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.930000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:24:28.942000 audit[1737]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.942000 audit[1737]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff678289e0 a2=0 a3=7f6a8fdefe90 items=0 ppid=1658 pid=1737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.942000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:24:28.947000 audit[1739]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1739 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.947000 audit[1739]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe807c4880 a2=0 a3=7f2def30ce90 items=0 ppid=1658 pid=1739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.947000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:24:28.951000 audit[1741]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:24:28.951000 audit[1741]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff51e99e90 a2=0 a3=7f2ef2487e90 items=0 ppid=1658 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.951000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:24:28.955000 audit[1743]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.955000 audit[1743]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe9f1d2470 a2=0 a3=7f101a396e90 items=0 ppid=1658 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.955000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:24:28.957103 systemd-networkd[1152]: docker0: Link UP Jun 25 16:24:28.970000 audit[1747]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.970000 audit[1747]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4a2742b0 a2=0 a3=7f18df21ee90 items=0 ppid=1658 pid=1747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.970000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:24:28.972000 audit[1748]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1748 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:28.972000 audit[1748]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd71d636e0 a2=0 a3=7f9b2a109e90 items=0 ppid=1658 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:28.972000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:24:28.974444 dockerd[1658]: time="2024-06-25T16:24:28.974394845Z" level=info msg="Loading containers: done." Jun 25 16:24:29.069921 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck713908177-merged.mount: Deactivated successfully. 
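With "Loading containers: done." logged and docker0 up, the chains programmed in the preceding records can be listed back. A read-only sketch, assuming the iproute2 and iptables binaries present on this image and root privileges:

import subprocess

def show(cmd):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print("$", " ".join(cmd))
    print(result.stdout or result.stderr)

show(["ip", "-brief", "link", "show", "docker0"])
for chain in ("DOCKER", "DOCKER-USER",
              "DOCKER-ISOLATION-STAGE-1", "DOCKER-ISOLATION-STAGE-2"):
    show(["iptables", "-S", chain])                 # filter-table chains
show(["iptables", "-t", "nat", "-S", "DOCKER"])     # the nat-table DOCKER chain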
Jun 25 16:24:29.079537 dockerd[1658]: time="2024-06-25T16:24:29.079465138Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:24:29.079897 dockerd[1658]: time="2024-06-25T16:24:29.079861111Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:24:29.080086 dockerd[1658]: time="2024-06-25T16:24:29.080058911Z" level=info msg="Daemon has completed initialization" Jun 25 16:24:29.122409 dockerd[1658]: time="2024-06-25T16:24:29.122319079Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:24:29.132894 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:24:29.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:29.962477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:24:29.962807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:29.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:29.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:29.972389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:30.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:30.225367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:30.229774 containerd[1398]: time="2024-06-25T16:24:30.229710518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:24:30.334748 kubelet[1800]: E0625 16:24:30.334696 1800 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:24:30.339051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:24:30.339433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:24:30.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:30.980918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850782528.mount: Deactivated successfully. 
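The kubelet keeps exiting with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a node bootstrapped this way that file is normally written later by kubeadm, so the restart loop is expected until then. Purely to illustrate the shape of the file, a sketch that writes a minimal KubeletConfiguration to the expected path; every value is a hypothetical placeholder, not what this node will actually be given:

# Illustration only: the restart loop above stems from a missing
# /var/lib/kubelet/config.yaml, which kubeadm normally generates. The values
# below are hypothetical placeholders showing the expected file shape.
import pathlib
import textwrap

config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDNS:
      - 10.96.0.10
    clusterDomain: cluster.local
    authentication:
      anonymous:
        enabled: false
""")

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)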
Jun 25 16:24:32.972683 containerd[1398]: time="2024-06-25T16:24:32.972597199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:32.974507 containerd[1398]: time="2024-06-25T16:24:32.974412581Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34611806" Jun 25 16:24:32.976032 containerd[1398]: time="2024-06-25T16:24:32.975982814Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:32.978967 containerd[1398]: time="2024-06-25T16:24:32.978919350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:32.981789 containerd[1398]: time="2024-06-25T16:24:32.981728340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:32.983607 containerd[1398]: time="2024-06-25T16:24:32.983548672Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.751179202s" Jun 25 16:24:32.983841 containerd[1398]: time="2024-06-25T16:24:32.983806169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:24:33.027640 containerd[1398]: time="2024-06-25T16:24:33.027460121Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:24:34.912936 containerd[1398]: time="2024-06-25T16:24:34.912803724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:34.914786 containerd[1398]: time="2024-06-25T16:24:34.914709446Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31721425" Jun 25 16:24:34.916446 containerd[1398]: time="2024-06-25T16:24:34.916399201Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:34.919551 containerd[1398]: time="2024-06-25T16:24:34.919501585Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:34.922430 containerd[1398]: time="2024-06-25T16:24:34.922373690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:34.924138 containerd[1398]: time="2024-06-25T16:24:34.924071535Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 1.896436738s" Jun 25 16:24:34.924488 containerd[1398]: time="2024-06-25T16:24:34.924144676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:24:34.962874 containerd[1398]: time="2024-06-25T16:24:34.962809842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:24:36.081900 containerd[1398]: time="2024-06-25T16:24:36.081820156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:36.083938 containerd[1398]: time="2024-06-25T16:24:36.083856758Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16927421" Jun 25 16:24:36.085182 containerd[1398]: time="2024-06-25T16:24:36.085137831Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:36.088578 containerd[1398]: time="2024-06-25T16:24:36.088537427Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:36.092394 containerd[1398]: time="2024-06-25T16:24:36.092351285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:36.094851 containerd[1398]: time="2024-06-25T16:24:36.094790791Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.131905128s" Jun 25 16:24:36.095070 containerd[1398]: time="2024-06-25T16:24:36.095035085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:24:36.130882 containerd[1398]: time="2024-06-25T16:24:36.130833271Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:24:37.282599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346615698.mount: Deactivated successfully. 
Jun 25 16:24:37.831479 containerd[1398]: time="2024-06-25T16:24:37.831398959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.833167 containerd[1398]: time="2024-06-25T16:24:37.833087947Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28120314" Jun 25 16:24:37.835074 containerd[1398]: time="2024-06-25T16:24:37.835013069Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.837685 containerd[1398]: time="2024-06-25T16:24:37.837572364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.839825 containerd[1398]: time="2024-06-25T16:24:37.839771912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:37.840984 containerd[1398]: time="2024-06-25T16:24:37.840925224Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.709785871s" Jun 25 16:24:37.840984 containerd[1398]: time="2024-06-25T16:24:37.840975765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:24:37.875153 containerd[1398]: time="2024-06-25T16:24:37.875077517Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:24:38.290073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430260475.mount: Deactivated successfully. 
Jun 25 16:24:38.295709 containerd[1398]: time="2024-06-25T16:24:38.295636287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.296663 containerd[1398]: time="2024-06-25T16:24:38.296587842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jun 25 16:24:38.298674 containerd[1398]: time="2024-06-25T16:24:38.298630138Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.301599 containerd[1398]: time="2024-06-25T16:24:38.301561651Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.304902 containerd[1398]: time="2024-06-25T16:24:38.304859932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:38.306555 containerd[1398]: time="2024-06-25T16:24:38.306510139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 431.189308ms" Jun 25 16:24:38.306756 containerd[1398]: time="2024-06-25T16:24:38.306719564Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:24:38.341498 containerd[1398]: time="2024-06-25T16:24:38.341441239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:24:38.888130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602830225.mount: Deactivated successfully. Jun 25 16:24:40.590979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:24:40.591338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:40.621315 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:24:40.621472 kernel: audit: type=1130 audit(1719332680.590:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.644309 kernel: audit: type=1131 audit(1719332680.590:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.622847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
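By this point kubelet.service has been restarted twice, each time roughly ten seconds after the previous failure, a cadence governed by the unit's Restart=/RestartSec= settings. A read-only sketch that asks systemd for those properties and the restart counter (the property names are standard systemd ones; the actual values come from the shipped unit file):

import subprocess

out = subprocess.run(
    ["systemctl", "show", "kubelet.service",
     "-p", "Restart", "-p", "RestartUSec", "-p", "NRestarts", "-p", "Result"],
    capture_output=True, text=True)
print(out.stdout)
# The ~10 s gap observed in this log suggests RestartSec=10s and, at this point,
# NRestarts=2; the authoritative values are whatever the unit file sets.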
Jun 25 16:24:40.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:40.900649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:40.923325 kernel: audit: type=1130 audit(1719332680.899:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:41.058841 kubelet[1958]: E0625 16:24:41.058762 1958 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:24:41.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:41.063006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:24:41.063359 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:24:41.085372 kernel: audit: type=1131 audit(1719332681.062:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:41.404911 containerd[1398]: time="2024-06-25T16:24:41.404063848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.406522 containerd[1398]: time="2024-06-25T16:24:41.406447475Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115" Jun 25 16:24:41.408505 containerd[1398]: time="2024-06-25T16:24:41.408461330Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.412241 containerd[1398]: time="2024-06-25T16:24:41.412200907Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.417505 containerd[1398]: time="2024-06-25T16:24:41.417460556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:41.419790 containerd[1398]: time="2024-06-25T16:24:41.419733812Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.078218468s" Jun 25 16:24:41.419993 containerd[1398]: time="2024-06-25T16:24:41.419957217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:24:41.458960 containerd[1398]: time="2024-06-25T16:24:41.458878141Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:24:41.876457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960712500.mount: Deactivated successfully. Jun 25 16:24:42.531010 containerd[1398]: time="2024-06-25T16:24:42.530945685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:42.532477 containerd[1398]: time="2024-06-25T16:24:42.532409878Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16194623" Jun 25 16:24:42.534164 containerd[1398]: time="2024-06-25T16:24:42.534121123Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:42.536499 containerd[1398]: time="2024-06-25T16:24:42.536465094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:42.538602 containerd[1398]: time="2024-06-25T16:24:42.538566602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:24:42.539704 containerd[1398]: time="2024-06-25T16:24:42.539654986Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.080697483s" Jun 25 16:24:42.540216 containerd[1398]: time="2024-06-25T16:24:42.539713317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:24:46.537820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:46.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.581673 kernel: audit: type=1130 audit(1719332686.537:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.581809 kernel: audit: type=1131 audit(1719332686.537:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:46.584641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:46.616519 systemd[1]: Reloading. Jun 25 16:24:46.904283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:24:47.018929 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 16:24:47.042320 kernel: audit: type=1131 audit(1719332687.018:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.064357 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 16:24:47.064515 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 16:24:47.065200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:47.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:47.087355 kernel: audit: type=1130 audit(1719332687.064:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:24:47.089435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:47.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.317466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:47.345105 kernel: audit: type=1130 audit(1719332687.320:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:47.410519 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:24:47.410519 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:24:47.410519 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
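The kernel audit lines above carry their own timestamps in the form audit(&lt;unix-epoch&gt;.&lt;millis&gt;:&lt;serial&gt;), e.g. audit(1719332687.018:183). A minimal sketch, assuming the journal's wall-clock prefixes are UTC, converts that epoch back into the surrounding "Jun 25 16:24:47.018" form, which helps when correlating audit serial numbers with journal entries.

    # Convert an audit(<epoch>:<serial>) stamp from the kernel audit records
    # above into the journal's wall-clock form (UTC assumed).
    from datetime import datetime, timezone

    epoch = 1719332687.018  # from "audit(1719332687.018:183)"
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%b %d %H:%M:%S.%f"))
    # -> Jun 25 16:24:47.018000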
Jun 25 16:24:47.411339 kubelet[2127]: I0625 16:24:47.410619 2127 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:24:48.422361 kubelet[2127]: I0625 16:24:48.422223 2127 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:24:48.423095 kubelet[2127]: I0625 16:24:48.422366 2127 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:24:48.423095 kubelet[2127]: I0625 16:24:48.422832 2127 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:24:48.446018 kubelet[2127]: E0625 16:24:48.445985 2127 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.446286 kubelet[2127]: I0625 16:24:48.446136 2127 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:24:48.463900 kubelet[2127]: I0625 16:24:48.463842 2127 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:24:48.466377 kubelet[2127]: I0625 16:24:48.466334 2127 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:24:48.466676 kubelet[2127]: I0625 16:24:48.466633 2127 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:24:48.467497 kubelet[2127]: I0625 16:24:48.467457 2127 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:24:48.467497 kubelet[2127]: I0625 16:24:48.467494 2127 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:24:48.469017 kubelet[2127]: I0625 16:24:48.468974 2127 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:24:48.471097 kubelet[2127]: I0625 16:24:48.471043 2127 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:24:48.471097 
kubelet[2127]: I0625 16:24:48.471086 2127 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:24:48.471319 kubelet[2127]: I0625 16:24:48.471134 2127 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:24:48.471319 kubelet[2127]: I0625 16:24:48.471163 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:24:48.473405 kubelet[2127]: W0625 16:24:48.473308 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.473683 kubelet[2127]: E0625 16:24:48.473662 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.473958 kubelet[2127]: W0625 16:24:48.473912 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.474107 kubelet[2127]: E0625 16:24:48.474091 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.474364 kubelet[2127]: I0625 16:24:48.474344 2127 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:24:48.486457 kubelet[2127]: W0625 16:24:48.486425 2127 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
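The nodeConfig={...} blob the kubelet logs above is plain JSON, so the eviction settings buried in it can be extracted rather than read by eye. A minimal sketch, with the HardEvictionThresholds portion trimmed out of the logged line, lists the limits in effect on this node (memory.available 100Mi, nodefs.available 10%, nodefs.inodesFree 5%, imagefs.available 15%).

    import json

    # HardEvictionThresholds trimmed from the kubelet's nodeConfig={...} line above.
    node_config = json.loads('''
    {"HardEvictionThresholds":[
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}]}
    ''')

    for t in node_config["HardEvictionThresholds"]:
        v = t["Value"]
        limit = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
        print(t["Signal"], t["Operator"], limit)
    # memory.available LessThan 100Mi, nodefs.available LessThan 10%, ...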
Jun 25 16:24:48.487363 kubelet[2127]: I0625 16:24:48.487336 2127 server.go:1232] "Started kubelet" Jun 25 16:24:48.487780 kubelet[2127]: I0625 16:24:48.487732 2127 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:24:48.489240 kubelet[2127]: I0625 16:24:48.488842 2127 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:24:48.491267 kubelet[2127]: I0625 16:24:48.491215 2127 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:24:48.491718 kubelet[2127]: I0625 16:24:48.491701 2127 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:24:48.492303 kubelet[2127]: E0625 16:24:48.492119 2127 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal.17dc4bf88400c8f3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", UID:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 24, 48, 487303411, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 24, 48, 487303411, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal"}': 'Post "https://10.128.0.51:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.51:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:24:48.492871 kubelet[2127]: E0625 16:24:48.492850 2127 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:24:48.493001 kubelet[2127]: E0625 16:24:48.492987 2127 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:24:48.494756 kubelet[2127]: I0625 16:24:48.494733 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:24:48.496483 kubelet[2127]: I0625 16:24:48.496461 2127 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:24:48.499219 kubelet[2127]: I0625 16:24:48.499195 2127 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:24:48.499494 kubelet[2127]: I0625 16:24:48.499477 2127 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:24:48.500018 kubelet[2127]: E0625 16:24:48.499995 2127 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" not found" Jun 25 16:24:48.498000 audit[2137]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.502102 kubelet[2127]: E0625 16:24:48.502066 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="200ms" Jun 25 16:24:48.502411 kubelet[2127]: W0625 16:24:48.502366 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.502578 kubelet[2127]: E0625 16:24:48.502561 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.516287 kernel: audit: type=1325 audit(1719332688.498:186): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.516424 kernel: audit: type=1300 audit(1719332688.498:186): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff2870ab00 a2=0 a3=7f1f1166ee90 items=0 ppid=2127 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.498000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff2870ab00 a2=0 a3=7f1f1166ee90 items=0 ppid=2127 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:24:48.567839 kernel: audit: type=1327 audit(1719332688.498:186): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:24:48.549000 audit[2139]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.549000 audit[2139]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffdd5a1be70 a2=0 a3=7f63ad6c7e90 items=0 ppid=2127 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.617295 kernel: audit: type=1325 audit(1719332688.549:187): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.617470 kernel: audit: type=1300 audit(1719332688.549:187): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd5a1be70 a2=0 a3=7f63ad6c7e90 items=0 ppid=2127 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:24:48.579000 audit[2143]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.579000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffefb7bcbe0 a2=0 a3=7f6ef90a8e90 items=0 ppid=2127 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:24:48.583000 audit[2145]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.583000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe581bd4f0 a2=0 a3=7f831726ee90 items=0 ppid=2127 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:24:48.619969 kubelet[2127]: I0625 16:24:48.619937 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.621854 kubelet[2127]: I0625 16:24:48.621168 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:24:48.622043 kubelet[2127]: I0625 16:24:48.622025 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:24:48.622194 kubelet[2127]: I0625 16:24:48.622182 2127 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:24:48.624582 kubelet[2127]: E0625 16:24:48.624560 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 10.128.0.51:6443: connect: connection refused" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.625326 kubelet[2127]: I0625 16:24:48.625306 2127 policy_none.go:49] "None policy: Start" Jun 25 16:24:48.626535 kubelet[2127]: I0625 16:24:48.626513 2127 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:24:48.626626 kubelet[2127]: I0625 16:24:48.626548 2127 
state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:24:48.626000 audit[2150]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.626000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe01d362b0 a2=0 a3=7f8f3af02e90 items=0 ppid=2127 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.626000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:24:48.628475 kubelet[2127]: I0625 16:24:48.628231 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:24:48.628000 audit[2152]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.628000 audit[2152]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda93511c0 a2=0 a3=7f337246ee90 items=0 ppid=2127 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:24:48.631000 audit[2151]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=2151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:48.631000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc0516dcf0 a2=0 a3=7ff2f9508e90 items=0 ppid=2127 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.631000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:24:48.634452 kubelet[2127]: I0625 16:24:48.634429 2127 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:24:48.634564 kubelet[2127]: I0625 16:24:48.634461 2127 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:24:48.634564 kubelet[2127]: I0625 16:24:48.634485 2127 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:24:48.634564 kubelet[2127]: E0625 16:24:48.634549 2127 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:24:48.635000 audit[2153]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.635000 audit[2153]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff64207060 a2=0 a3=7fc414bebe90 items=0 ppid=2127 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:24:48.637000 audit[2154]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:48.637000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd29d5cf10 a2=0 a3=7efd44419e90 items=0 ppid=2127 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.637000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:24:48.641000 audit[2155]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:24:48.641000 audit[2155]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcda71a20 a2=0 a3=7fc7cbaefe90 items=0 ppid=2127 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.642908 kubelet[2127]: W0625 16:24:48.642816 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.642908 kubelet[2127]: E0625 16:24:48.642861 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:48.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:24:48.644042 kubelet[2127]: I0625 16:24:48.644014 2127 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:24:48.644432 kubelet[2127]: I0625 16:24:48.644408 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:24:48.643000 audit[2156]: 
NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:48.643000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe4a2a6a00 a2=0 a3=7f887c82be90 items=0 ppid=2127 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:24:48.651000 audit[2157]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:24:48.651000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdf8b78200 a2=0 a3=7fb612684e90 items=0 ppid=2127 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:24:48.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:24:48.652983 kubelet[2127]: E0625 16:24:48.652961 2127 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" not found" Jun 25 16:24:48.703731 kubelet[2127]: E0625 16:24:48.703565 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="400ms" Jun 25 16:24:48.735073 kubelet[2127]: I0625 16:24:48.735006 2127 topology_manager.go:215] "Topology Admit Handler" podUID="7f1284e3afd708b4901e0e4b0c076469" podNamespace="kube-system" podName="kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.741643 kubelet[2127]: I0625 16:24:48.741607 2127 topology_manager.go:215] "Topology Admit Handler" podUID="0aadab51f6639bdac574e2899803ad05" podNamespace="kube-system" podName="kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.757619 kubelet[2127]: I0625 16:24:48.757585 2127 topology_manager.go:215] "Topology Admit Handler" podUID="848510a13e1cbdfbfdb4f17a564ff963" podNamespace="kube-system" podName="kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819224 kubelet[2127]: I0625 16:24:48.819176 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f1284e3afd708b4901e0e4b0c076469-ca-certs\") pod \"kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"7f1284e3afd708b4901e0e4b0c076469\") " pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819443 kubelet[2127]: I0625 16:24:48.819247 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819443 kubelet[2127]: I0625 16:24:48.819300 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-k8s-certs\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819443 kubelet[2127]: I0625 16:24:48.819377 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f1284e3afd708b4901e0e4b0c076469-k8s-certs\") pod \"kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"7f1284e3afd708b4901e0e4b0c076469\") " pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819443 kubelet[2127]: I0625 16:24:48.819423 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f1284e3afd708b4901e0e4b0c076469-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"7f1284e3afd708b4901e0e4b0c076469\") " pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819682 kubelet[2127]: I0625 16:24:48.819456 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-ca-certs\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819682 kubelet[2127]: I0625 16:24:48.819493 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-kubeconfig\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819682 kubelet[2127]: I0625 16:24:48.819532 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.819682 kubelet[2127]: I0625 16:24:48.819570 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/848510a13e1cbdfbfdb4f17a564ff963-kubeconfig\") pod 
\"kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"848510a13e1cbdfbfdb4f17a564ff963\") " pod="kube-system/kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.830344 kubelet[2127]: I0625 16:24:48.830311 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:48.830767 kubelet[2127]: E0625 16:24:48.830728 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 10.128.0.51:6443: connect: connection refused" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:49.073181 containerd[1398]: time="2024-06-25T16:24:49.072486233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal,Uid:7f1284e3afd708b4901e0e4b0c076469,Namespace:kube-system,Attempt:0,}" Jun 25 16:24:49.087039 containerd[1398]: time="2024-06-25T16:24:49.086916185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal,Uid:848510a13e1cbdfbfdb4f17a564ff963,Namespace:kube-system,Attempt:0,}" Jun 25 16:24:49.087347 containerd[1398]: time="2024-06-25T16:24:49.086916391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal,Uid:0aadab51f6639bdac574e2899803ad05,Namespace:kube-system,Attempt:0,}" Jun 25 16:24:49.104525 kubelet[2127]: E0625 16:24:49.104474 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="800ms" Jun 25 16:24:49.237295 kubelet[2127]: I0625 16:24:49.236982 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:49.237518 kubelet[2127]: E0625 16:24:49.237423 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 10.128.0.51:6443: connect: connection refused" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:49.359711 kubelet[2127]: W0625 16:24:49.359547 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:49.359711 kubelet[2127]: E0625 16:24:49.359629 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:49.468488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61491320.mount: Deactivated successfully. 
Jun 25 16:24:49.479928 containerd[1398]: time="2024-06-25T16:24:49.479875567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.481411 containerd[1398]: time="2024-06-25T16:24:49.481344466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jun 25 16:24:49.482686 containerd[1398]: time="2024-06-25T16:24:49.482643881Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.484076 containerd[1398]: time="2024-06-25T16:24:49.484021586Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.485767 containerd[1398]: time="2024-06-25T16:24:49.485667526Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.487335 containerd[1398]: time="2024-06-25T16:24:49.487249132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:24:49.488035 containerd[1398]: time="2024-06-25T16:24:49.487975658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:24:49.489743 containerd[1398]: time="2024-06-25T16:24:49.489684046Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.491155 containerd[1398]: time="2024-06-25T16:24:49.491104634Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.492300 containerd[1398]: time="2024-06-25T16:24:49.492241891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.494765 containerd[1398]: time="2024-06-25T16:24:49.494716306Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.495773 containerd[1398]: time="2024-06-25T16:24:49.495696070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.496770 containerd[1398]: time="2024-06-25T16:24:49.496734136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.500832 containerd[1398]: time="2024-06-25T16:24:49.500784026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 413.729693ms" Jun 25 16:24:49.501593 containerd[1398]: time="2024-06-25T16:24:49.501538157Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.502907 containerd[1398]: time="2024-06-25T16:24:49.502859918Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 430.222177ms" Jun 25 16:24:49.518225 containerd[1398]: time="2024-06-25T16:24:49.518155020Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:24:49.519426 containerd[1398]: time="2024-06-25T16:24:49.519368278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 431.842037ms" Jun 25 16:24:49.682637 containerd[1398]: time="2024-06-25T16:24:49.682235401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:49.682637 containerd[1398]: time="2024-06-25T16:24:49.682348016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:49.682637 containerd[1398]: time="2024-06-25T16:24:49.682386324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:49.682637 containerd[1398]: time="2024-06-25T16:24:49.682413346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:49.686791 containerd[1398]: time="2024-06-25T16:24:49.686359119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:49.686791 containerd[1398]: time="2024-06-25T16:24:49.686454574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:49.686791 containerd[1398]: time="2024-06-25T16:24:49.686486551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:49.686791 containerd[1398]: time="2024-06-25T16:24:49.686509145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:49.692783 containerd[1398]: time="2024-06-25T16:24:49.692390876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:24:49.692783 containerd[1398]: time="2024-06-25T16:24:49.692470220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:49.692783 containerd[1398]: time="2024-06-25T16:24:49.692504967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:24:49.692783 containerd[1398]: time="2024-06-25T16:24:49.692531612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:24:49.817829 containerd[1398]: time="2024-06-25T16:24:49.817689673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal,Uid:0aadab51f6639bdac574e2899803ad05,Namespace:kube-system,Attempt:0,} returns sandbox id \"c144e3244c022dc767ffab6515f53bb16c0259f3c7470a7aa5576a828f4295fe\"" Jun 25 16:24:49.820245 kubelet[2127]: E0625 16:24:49.820217 2127 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flat" Jun 25 16:24:49.825051 containerd[1398]: time="2024-06-25T16:24:49.825008508Z" level=info msg="CreateContainer within sandbox \"c144e3244c022dc767ffab6515f53bb16c0259f3c7470a7aa5576a828f4295fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:24:49.826046 containerd[1398]: time="2024-06-25T16:24:49.825826054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal,Uid:7f1284e3afd708b4901e0e4b0c076469,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba9486b11d403b31bac8daa5a60fe3fa6dca1227dd0c152a59b5db55c264ae19\"" Jun 25 16:24:49.831631 kubelet[2127]: E0625 16:24:49.831603 2127 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-21291" Jun 25 16:24:49.834299 containerd[1398]: time="2024-06-25T16:24:49.834244525Z" level=info msg="CreateContainer within sandbox \"ba9486b11d403b31bac8daa5a60fe3fa6dca1227dd0c152a59b5db55c264ae19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:24:49.840995 containerd[1398]: time="2024-06-25T16:24:49.840953740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal,Uid:848510a13e1cbdfbfdb4f17a564ff963,Namespace:kube-system,Attempt:0,} returns sandbox id \"21de550c2b99111bd3c20f2b1191451975603626dcafe8f951fd6bff61b94531\"" Jun 25 16:24:49.842905 kubelet[2127]: E0625 16:24:49.842567 2127 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-21291" Jun 25 16:24:49.844080 containerd[1398]: 
time="2024-06-25T16:24:49.844041408Z" level=info msg="CreateContainer within sandbox \"21de550c2b99111bd3c20f2b1191451975603626dcafe8f951fd6bff61b94531\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:24:49.861036 containerd[1398]: time="2024-06-25T16:24:49.860989041Z" level=info msg="CreateContainer within sandbox \"ba9486b11d403b31bac8daa5a60fe3fa6dca1227dd0c152a59b5db55c264ae19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d3dd0480caeedc15fe8df17b28944205bffb37832575d1a996b69c9e35196cbb\"" Jun 25 16:24:49.862201 containerd[1398]: time="2024-06-25T16:24:49.862163517Z" level=info msg="StartContainer for \"d3dd0480caeedc15fe8df17b28944205bffb37832575d1a996b69c9e35196cbb\"" Jun 25 16:24:49.872595 containerd[1398]: time="2024-06-25T16:24:49.872530868Z" level=info msg="CreateContainer within sandbox \"c144e3244c022dc767ffab6515f53bb16c0259f3c7470a7aa5576a828f4295fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8cd57c01ce5d0c92e9bfb94056d9e72abfdfe2c296b051b8dd993b4193c98b0b\"" Jun 25 16:24:49.873499 containerd[1398]: time="2024-06-25T16:24:49.873456737Z" level=info msg="StartContainer for \"8cd57c01ce5d0c92e9bfb94056d9e72abfdfe2c296b051b8dd993b4193c98b0b\"" Jun 25 16:24:49.885483 containerd[1398]: time="2024-06-25T16:24:49.885437826Z" level=info msg="CreateContainer within sandbox \"21de550c2b99111bd3c20f2b1191451975603626dcafe8f951fd6bff61b94531\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0ab37bc8b44af7190e66ff4c995ecdf0269dc1aeca90190a134faed5f031432\"" Jun 25 16:24:49.886217 containerd[1398]: time="2024-06-25T16:24:49.886181384Z" level=info msg="StartContainer for \"a0ab37bc8b44af7190e66ff4c995ecdf0269dc1aeca90190a134faed5f031432\"" Jun 25 16:24:49.905940 kubelet[2127]: E0625 16:24:49.905888 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.51:6443: connect: connection refused" interval="1.6s" Jun 25 16:24:50.042416 kubelet[2127]: W0625 16:24:50.041107 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:50.042416 kubelet[2127]: E0625 16:24:50.041193 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:50.046097 containerd[1398]: time="2024-06-25T16:24:50.046028423Z" level=info msg="StartContainer for \"8cd57c01ce5d0c92e9bfb94056d9e72abfdfe2c296b051b8dd993b4193c98b0b\" returns successfully" Jun 25 16:24:50.054319 kubelet[2127]: I0625 16:24:50.051604 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:50.054319 kubelet[2127]: E0625 16:24:50.052189 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.51:6443/api/v1/nodes\": dial tcp 
10.128.0.51:6443: connect: connection refused" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:50.064813 kubelet[2127]: W0625 16:24:50.064698 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:50.064813 kubelet[2127]: E0625 16:24:50.064778 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.51:6443: connect: connection refused Jun 25 16:24:50.069893 containerd[1398]: time="2024-06-25T16:24:50.069843749Z" level=info msg="StartContainer for \"d3dd0480caeedc15fe8df17b28944205bffb37832575d1a996b69c9e35196cbb\" returns successfully" Jun 25 16:24:50.120900 containerd[1398]: time="2024-06-25T16:24:50.120836229Z" level=info msg="StartContainer for \"a0ab37bc8b44af7190e66ff4c995ecdf0269dc1aeca90190a134faed5f031432\" returns successfully" Jun 25 16:24:51.659400 kubelet[2127]: I0625 16:24:51.659365 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:53.987768 kubelet[2127]: E0625 16:24:53.987703 2127 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" not found" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:54.043447 kubelet[2127]: I0625 16:24:54.043404 2127 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:54.476925 kubelet[2127]: I0625 16:24:54.476855 2127 apiserver.go:52] "Watching apiserver" Jun 25 16:24:54.500217 kubelet[2127]: I0625 16:24:54.500149 2127 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:24:56.461013 kubelet[2127]: W0625 16:24:56.460969 2127 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 25 16:24:56.487462 systemd[1]: Reloading. Jun 25 16:24:56.752488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:24:56.900514 kubelet[2127]: I0625 16:24:56.900459 2127 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:24:56.901066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:56.917104 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:24:56.917741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:56.929325 kernel: kauditd_printk_skb: 31 callbacks suppressed Jun 25 16:24:56.929406 kernel: audit: type=1131 audit(1719332696.916:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:24:56.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:56.947301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:24:57.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:57.185467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:24:57.210339 kernel: audit: type=1130 audit(1719332697.184:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:24:57.311618 kubelet[2481]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:24:57.312338 kubelet[2481]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:24:57.312440 kubelet[2481]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:24:57.312694 kubelet[2481]: I0625 16:24:57.312648 2481 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:24:57.320101 kubelet[2481]: I0625 16:24:57.320075 2481 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:24:57.320291 kubelet[2481]: I0625 16:24:57.320274 2481 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:24:57.320672 kubelet[2481]: I0625 16:24:57.320655 2481 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:24:57.323438 kubelet[2481]: I0625 16:24:57.323414 2481 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:24:57.325636 kubelet[2481]: I0625 16:24:57.325618 2481 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:24:57.342449 kubelet[2481]: I0625 16:24:57.342426 2481 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:24:57.343360 kubelet[2481]: I0625 16:24:57.343339 2481 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:24:57.343854 kubelet[2481]: I0625 16:24:57.343804 2481 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:24:57.344026 kubelet[2481]: I0625 16:24:57.344013 2481 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:24:57.344108 kubelet[2481]: I0625 16:24:57.344099 2481 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:24:57.344221 kubelet[2481]: I0625 16:24:57.344212 2481 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:24:57.344434 kubelet[2481]: I0625 16:24:57.344423 2481 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:24:57.349739 kubelet[2481]: I0625 16:24:57.346050 2481 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:24:57.349739 kubelet[2481]: I0625 16:24:57.346104 2481 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:24:57.349739 kubelet[2481]: I0625 16:24:57.346133 2481 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:24:57.361843 kubelet[2481]: I0625 16:24:57.361812 2481 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:24:57.366407 kubelet[2481]: I0625 16:24:57.366384 2481 server.go:1232] "Started kubelet" Jun 25 16:24:57.373388 kubelet[2481]: E0625 16:24:57.373359 2481 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:24:57.374092 kubelet[2481]: E0625 16:24:57.374073 2481 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:24:57.375634 kubelet[2481]: I0625 16:24:57.375201 2481 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:24:57.379808 kubelet[2481]: I0625 16:24:57.379783 2481 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:24:57.381281 kubelet[2481]: I0625 16:24:57.381247 2481 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:24:57.383101 kubelet[2481]: I0625 16:24:57.383079 2481 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:24:57.383513 kubelet[2481]: I0625 16:24:57.383491 2481 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:24:57.384690 kubelet[2481]: I0625 16:24:57.384673 2481 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:24:57.384968 kubelet[2481]: I0625 16:24:57.384949 2481 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:24:57.385305 kubelet[2481]: I0625 16:24:57.385288 2481 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:24:57.451058 kubelet[2481]: I0625 16:24:57.450930 2481 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:24:57.465300 kubelet[2481]: I0625 16:24:57.465243 2481 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:24:57.465511 kubelet[2481]: I0625 16:24:57.465498 2481 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:24:57.465730 kubelet[2481]: I0625 16:24:57.465716 2481 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:24:57.465895 kubelet[2481]: E0625 16:24:57.465882 2481 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:24:57.494375 kubelet[2481]: I0625 16:24:57.494330 2481 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.507776 kubelet[2481]: I0625 16:24:57.507744 2481 kubelet_node_status.go:108] "Node was previously registered" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.508100 kubelet[2481]: I0625 16:24:57.508083 2481 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.566068 kubelet[2481]: E0625 16:24:57.566029 2481 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:24:57.580232 kubelet[2481]: I0625 16:24:57.580192 2481 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:24:57.580232 kubelet[2481]: I0625 16:24:57.580222 2481 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:24:57.580232 kubelet[2481]: I0625 16:24:57.580247 2481 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:24:57.580889 kubelet[2481]: I0625 16:24:57.580546 2481 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:24:57.580889 kubelet[2481]: I0625 16:24:57.580582 2481 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:24:57.580889 kubelet[2481]: I0625 16:24:57.580595 2481 policy_none.go:49] "None policy: Start" Jun 25 16:24:57.582201 kubelet[2481]: I0625 16:24:57.582157 2481 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 
16:24:57.583845 kubelet[2481]: I0625 16:24:57.582318 2481 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:24:57.583845 kubelet[2481]: I0625 16:24:57.582557 2481 state_mem.go:75] "Updated machine memory state" Jun 25 16:24:57.585950 kubelet[2481]: I0625 16:24:57.585918 2481 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:24:57.586749 kubelet[2481]: I0625 16:24:57.586727 2481 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:24:57.768161 kubelet[2481]: I0625 16:24:57.767119 2481 topology_manager.go:215] "Topology Admit Handler" podUID="848510a13e1cbdfbfdb4f17a564ff963" podNamespace="kube-system" podName="kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.768161 kubelet[2481]: I0625 16:24:57.767306 2481 topology_manager.go:215] "Topology Admit Handler" podUID="7f1284e3afd708b4901e0e4b0c076469" podNamespace="kube-system" podName="kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.768161 kubelet[2481]: I0625 16:24:57.767392 2481 topology_manager.go:215] "Topology Admit Handler" podUID="0aadab51f6639bdac574e2899803ad05" podNamespace="kube-system" podName="kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.773657 kubelet[2481]: W0625 16:24:57.773626 2481 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 25 16:24:57.776465 kubelet[2481]: W0625 16:24:57.776423 2481 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 25 16:24:57.776659 kubelet[2481]: E0625 16:24:57.776549 2481 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.778449 kubelet[2481]: W0625 16:24:57.778425 2481 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 25 16:24:57.787466 kubelet[2481]: I0625 16:24:57.787429 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f1284e3afd708b4901e0e4b0c076469-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"7f1284e3afd708b4901e0e4b0c076469\") " pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.787748 kubelet[2481]: I0625 16:24:57.787726 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-ca-certs\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.787895 kubelet[2481]: I0625 16:24:57.787874 2481 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-flexvolume-dir\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.788030 kubelet[2481]: I0625 16:24:57.787927 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.788030 kubelet[2481]: I0625 16:24:57.787967 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/848510a13e1cbdfbfdb4f17a564ff963-kubeconfig\") pod \"kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"848510a13e1cbdfbfdb4f17a564ff963\") " pod="kube-system/kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.788030 kubelet[2481]: I0625 16:24:57.788005 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f1284e3afd708b4901e0e4b0c076469-ca-certs\") pod \"kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"7f1284e3afd708b4901e0e4b0c076469\") " pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.788205 kubelet[2481]: I0625 16:24:57.788044 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f1284e3afd708b4901e0e4b0c076469-k8s-certs\") pod \"kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"7f1284e3afd708b4901e0e4b0c076469\") " pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.788205 kubelet[2481]: I0625 16:24:57.788082 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-k8s-certs\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:57.788205 kubelet[2481]: I0625 16:24:57.788121 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0aadab51f6639bdac574e2899803ad05-kubeconfig\") pod \"kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" (UID: \"0aadab51f6639bdac574e2899803ad05\") " pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:58.358885 kubelet[2481]: I0625 16:24:58.358841 2481 apiserver.go:52] "Watching apiserver" Jun 25 16:24:58.385500 kubelet[2481]: I0625 16:24:58.385433 2481 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:24:58.545741 kubelet[2481]: W0625 16:24:58.545705 2481 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jun 25 16:24:58.546098 kubelet[2481]: E0625 16:24:58.546078 2481 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:24:58.580065 kubelet[2481]: I0625 16:24:58.580020 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" podStartSLOduration=1.579939143 podCreationTimestamp="2024-06-25 16:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:58.570074396 +0000 UTC m=+1.363687841" watchObservedRunningTime="2024-06-25 16:24:58.579939143 +0000 UTC m=+1.373552584" Jun 25 16:24:58.596539 kubelet[2481]: I0625 16:24:58.596493 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" podStartSLOduration=2.596434971 podCreationTimestamp="2024-06-25 16:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:58.580993584 +0000 UTC m=+1.374607025" watchObservedRunningTime="2024-06-25 16:24:58.596434971 +0000 UTC m=+1.390048417" Jun 25 16:24:58.609661 kubelet[2481]: I0625 16:24:58.609536 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" podStartSLOduration=1.6094849980000001 podCreationTimestamp="2024-06-25 16:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:24:58.596750139 +0000 UTC m=+1.390363581" watchObservedRunningTime="2024-06-25 16:24:58.609484998 +0000 UTC m=+1.403098439" Jun 25 16:25:01.438196 update_engine[1385]: I0625 16:25:01.437317 1385 update_attempter.cc:509] Updating boot flags... Jun 25 16:25:01.651292 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2547) Jun 25 16:25:01.832298 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2547) Jun 25 16:25:02.662862 sudo[1648]: pam_unix(sudo:session): session closed for user root Jun 25 16:25:02.661000 audit[1648]: USER_END pid=1648 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:02.689309 kernel: audit: type=1106 audit(1719332702.661:200): pid=1648 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:25:02.689527 kernel: audit: type=1104 audit(1719332702.661:201): pid=1648 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:02.661000 audit[1648]: CRED_DISP pid=1648 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:25:02.708601 sshd[1642]: pam_unix(sshd:session): session closed for user core Jun 25 16:25:02.712759 kernel: audit: type=1106 audit(1719332702.708:202): pid=1642 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:02.708000 audit[1642]: USER_END pid=1642 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:02.716786 systemd-logind[1382]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:25:02.720494 systemd[1]: sshd@6-10.128.0.51:22-139.178.89.65:33248.service: Deactivated successfully. Jun 25 16:25:02.721932 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:25:02.724649 systemd-logind[1382]: Removed session 7. Jun 25 16:25:02.708000 audit[1642]: CRED_DISP pid=1642 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:02.768540 kernel: audit: type=1104 audit(1719332702.708:203): pid=1642 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:02.768735 kernel: audit: type=1131 audit(1719332702.716:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.51:22-139.178.89.65:33248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:02.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.51:22-139.178.89.65:33248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:09.140519 kubelet[2481]: I0625 16:25:09.140468 2481 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:25:09.141359 containerd[1398]: time="2024-06-25T16:25:09.141280609Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
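The audit(<seconds>.<millis>:<serial>) header in the records above carries seconds since the Unix epoch plus a per-boot event serial, so it can be converted to reproduce the wall-clock times printed by the surrounding syslog lines. A minimal Python sketch, using the USER_END record for the sudo session (audit(1719332702.661:200)) as the sample value:

    from datetime import datetime, timezone

    # "audit(1719332702.661:200)" = epoch seconds.millis plus a per-boot serial.
    stamp = "1719332702.661:200"
    epoch, serial = stamp.split(":")
    print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
    # prints: 2024-06-25 16:25:02.661000+00:00 serial 200
    # which matches the "Jun 25 16:25:02.661000" timestamp shown above.
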
Jun 25 16:25:09.141871 kubelet[2481]: I0625 16:25:09.141738 2481 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:25:09.876368 kubelet[2481]: I0625 16:25:09.876308 2481 topology_manager.go:215] "Topology Admit Handler" podUID="4ff761f2-4b42-4ed6-b59c-2794ba382df1" podNamespace="kube-system" podName="kube-proxy-qcdgf" Jun 25 16:25:09.988362 kubelet[2481]: I0625 16:25:09.988304 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ff761f2-4b42-4ed6-b59c-2794ba382df1-lib-modules\") pod \"kube-proxy-qcdgf\" (UID: \"4ff761f2-4b42-4ed6-b59c-2794ba382df1\") " pod="kube-system/kube-proxy-qcdgf" Jun 25 16:25:09.988616 kubelet[2481]: I0625 16:25:09.988377 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ff761f2-4b42-4ed6-b59c-2794ba382df1-xtables-lock\") pod \"kube-proxy-qcdgf\" (UID: \"4ff761f2-4b42-4ed6-b59c-2794ba382df1\") " pod="kube-system/kube-proxy-qcdgf" Jun 25 16:25:09.988616 kubelet[2481]: I0625 16:25:09.988415 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ff761f2-4b42-4ed6-b59c-2794ba382df1-kube-proxy\") pod \"kube-proxy-qcdgf\" (UID: \"4ff761f2-4b42-4ed6-b59c-2794ba382df1\") " pod="kube-system/kube-proxy-qcdgf" Jun 25 16:25:09.988616 kubelet[2481]: I0625 16:25:09.988447 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2p8\" (UniqueName: \"kubernetes.io/projected/4ff761f2-4b42-4ed6-b59c-2794ba382df1-kube-api-access-4k2p8\") pod \"kube-proxy-qcdgf\" (UID: \"4ff761f2-4b42-4ed6-b59c-2794ba382df1\") " pod="kube-system/kube-proxy-qcdgf" Jun 25 16:25:10.121817 kubelet[2481]: I0625 16:25:10.121779 2481 topology_manager.go:215] "Topology Admit Handler" podUID="51b1c235-50c8-495d-b98c-91b2e1f5843f" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-4ndhq" Jun 25 16:25:10.181640 containerd[1398]: time="2024-06-25T16:25:10.181576107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qcdgf,Uid:4ff761f2-4b42-4ed6-b59c-2794ba382df1,Namespace:kube-system,Attempt:0,}" Jun 25 16:25:10.189755 kubelet[2481]: I0625 16:25:10.189703 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51b1c235-50c8-495d-b98c-91b2e1f5843f-var-lib-calico\") pod \"tigera-operator-76c4974c85-4ndhq\" (UID: \"51b1c235-50c8-495d-b98c-91b2e1f5843f\") " pod="tigera-operator/tigera-operator-76c4974c85-4ndhq" Jun 25 16:25:10.190387 kubelet[2481]: I0625 16:25:10.189790 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-424mt\" (UniqueName: \"kubernetes.io/projected/51b1c235-50c8-495d-b98c-91b2e1f5843f-kube-api-access-424mt\") pod \"tigera-operator-76c4974c85-4ndhq\" (UID: \"51b1c235-50c8-495d-b98c-91b2e1f5843f\") " pod="tigera-operator/tigera-operator-76c4974c85-4ndhq" Jun 25 16:25:10.229625 containerd[1398]: time="2024-06-25T16:25:10.229426556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:10.229625 containerd[1398]: time="2024-06-25T16:25:10.229547425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:10.229989 containerd[1398]: time="2024-06-25T16:25:10.229912505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:10.229989 containerd[1398]: time="2024-06-25T16:25:10.229940380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:10.270511 systemd[1]: run-containerd-runc-k8s.io-a7cef43c65f327de68100b36919ce9b9a218037cb000188b927103ac8ecf939c-runc.0HUGBo.mount: Deactivated successfully. Jun 25 16:25:10.311195 containerd[1398]: time="2024-06-25T16:25:10.311149873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qcdgf,Uid:4ff761f2-4b42-4ed6-b59c-2794ba382df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7cef43c65f327de68100b36919ce9b9a218037cb000188b927103ac8ecf939c\"" Jun 25 16:25:10.315877 containerd[1398]: time="2024-06-25T16:25:10.315807103Z" level=info msg="CreateContainer within sandbox \"a7cef43c65f327de68100b36919ce9b9a218037cb000188b927103ac8ecf939c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:25:10.336917 containerd[1398]: time="2024-06-25T16:25:10.336844703Z" level=info msg="CreateContainer within sandbox \"a7cef43c65f327de68100b36919ce9b9a218037cb000188b927103ac8ecf939c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0357b25d7dcbc645dc4ca47def4b1f424aa48fe62acc4188ca7aeaa8b49b2da\"" Jun 25 16:25:10.338489 containerd[1398]: time="2024-06-25T16:25:10.338452768Z" level=info msg="StartContainer for \"a0357b25d7dcbc645dc4ca47def4b1f424aa48fe62acc4188ca7aeaa8b49b2da\"" Jun 25 16:25:10.421194 containerd[1398]: time="2024-06-25T16:25:10.421127961Z" level=info msg="StartContainer for \"a0357b25d7dcbc645dc4ca47def4b1f424aa48fe62acc4188ca7aeaa8b49b2da\" returns successfully" Jun 25 16:25:10.433509 containerd[1398]: time="2024-06-25T16:25:10.433388678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-4ndhq,Uid:51b1c235-50c8-495d-b98c-91b2e1f5843f,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:25:10.476885 containerd[1398]: time="2024-06-25T16:25:10.476774085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:10.477240 containerd[1398]: time="2024-06-25T16:25:10.477166638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:10.477554 containerd[1398]: time="2024-06-25T16:25:10.477473869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:10.477782 containerd[1398]: time="2024-06-25T16:25:10.477718687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:10.573247 kernel: audit: type=1325 audit(1719332710.554:205): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.554000 audit[2713]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.554000 audit[2713]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1528a0e0 a2=0 a3=7ffc1528a0cc items=0 ppid=2636 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.616494 kernel: audit: type=1300 audit(1719332710.554:205): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1528a0e0 a2=0 a3=7ffc1528a0cc items=0 ppid=2636 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.554000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:25:10.630834 containerd[1398]: time="2024-06-25T16:25:10.630765852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-4ndhq,Uid:51b1c235-50c8-495d-b98c-91b2e1f5843f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"08fdf81d71dec97a84724cfb65454c8e7c71eff2ff1b0000f714816a81ab34c8\"" Jun 25 16:25:10.578000 audit[2715]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.649781 kernel: audit: type=1327 audit(1719332710.554:205): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:25:10.649992 kernel: audit: type=1325 audit(1719332710.578:206): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2715 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.578000 audit[2715]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff5ca4ca0 a2=0 a3=7ffff5ca4c8c items=0 ppid=2636 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.656184 containerd[1398]: time="2024-06-25T16:25:10.654964317Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:25:10.656306 kubelet[2481]: E0625 16:25:10.654101 2481 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Jun 25 16:25:10.682925 kernel: audit: type=1300 audit(1719332710.578:206): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff5ca4ca0 a2=0 a3=7ffff5ca4c8c items=0 ppid=2636 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.684230 kernel: audit: type=1327 audit(1719332710.578:206): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 
25 16:25:10.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:25:10.594000 audit[2714]: NETFILTER_CFG table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.716294 kernel: audit: type=1325 audit(1719332710.594:207): table=mangle:40 family=2 entries=1 op=nft_register_chain pid=2714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.594000 audit[2714]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe85da9f40 a2=0 a3=7ffe85da9f2c items=0 ppid=2636 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.749303 kernel: audit: type=1300 audit(1719332710.594:207): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe85da9f40 a2=0 a3=7ffe85da9f2c items=0 ppid=2636 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:25:10.599000 audit[2717]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.780842 kernel: audit: type=1327 audit(1719332710.594:207): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:25:10.781081 kernel: audit: type=1325 audit(1719332710.599:208): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.599000 audit[2717]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc9dfc990 a2=0 a3=7ffcc9dfc97c items=0 ppid=2636 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:25:10.603000 audit[2716]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.603000 audit[2716]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc54216eb0 a2=0 a3=7ffc54216e9c items=0 ppid=2636 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:25:10.616000 audit[2719]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.616000 audit[2719]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff168de990 a2=0 a3=7fff168de97c items=0 ppid=2636 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:25:10.645000 audit[2726]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.645000 audit[2726]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe24a67a30 a2=0 a3=7ffe24a67a1c items=0 ppid=2636 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:25:10.657000 audit[2728]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.657000 audit[2728]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcd739e560 a2=0 a3=7ffcd739e54c items=0 ppid=2636 pid=2728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.657000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:25:10.668000 audit[2731]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2731 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.668000 audit[2731]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd09a4ce50 a2=0 a3=7ffd09a4ce3c items=0 ppid=2636 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:25:10.671000 audit[2732]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.671000 audit[2732]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd94d39fa0 a2=0 a3=7ffd94d39f8c items=0 ppid=2636 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:25:10.676000 audit[2734]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.676000 audit[2734]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 
a1=7ffc289656e0 a2=0 a3=7ffc289656cc items=0 ppid=2636 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:25:10.678000 audit[2735]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2735 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.678000 audit[2735]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd1205260 a2=0 a3=7ffdd120524c items=0 ppid=2636 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:25:10.686000 audit[2737]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2737 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.686000 audit[2737]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff60e0c080 a2=0 a3=7fff60e0c06c items=0 ppid=2636 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:25:10.696000 audit[2740]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.696000 audit[2740]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdc08a46b0 a2=0 a3=7ffdc08a469c items=0 ppid=2636 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:25:10.698000 audit[2741]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.698000 audit[2741]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce4700e50 a2=0 a3=7ffce4700e3c items=0 ppid=2636 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 
16:25:10.704000 audit[2743]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.704000 audit[2743]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff56753e90 a2=0 a3=7fff56753e7c items=0 ppid=2636 pid=2743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:25:10.704000 audit[2744]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.704000 audit[2744]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4b6fbc90 a2=0 a3=7ffd4b6fbc7c items=0 ppid=2636 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:25:10.709000 audit[2746]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2746 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.709000 audit[2746]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd44663320 a2=0 a3=7ffd4466330c items=0 ppid=2636 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:25:10.720000 audit[2749]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.720000 audit[2749]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe73c89270 a2=0 a3=7ffe73c8925c items=0 ppid=2636 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:25:10.792000 audit[2752]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.792000 audit[2752]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb2a28880 a2=0 a3=7fffb2a2886c items=0 ppid=2636 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:25:10.794000 audit[2753]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2753 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.794000 audit[2753]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff620cd150 a2=0 a3=7fff620cd13c items=0 ppid=2636 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:25:10.798000 audit[2755]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2755 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.798000 audit[2755]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc9cf66970 a2=0 a3=7ffc9cf6695c items=0 ppid=2636 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:25:10.807000 audit[2758]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.807000 audit[2758]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8dff9f40 a2=0 a3=7fff8dff9f2c items=0 ppid=2636 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:25:10.810000 audit[2759]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.810000 audit[2759]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7e59a210 a2=0 a3=7ffc7e59a1fc items=0 ppid=2636 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:25:10.815000 audit[2761]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:25:10.815000 audit[2761]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe415cc3d0 
a2=0 a3=7ffe415cc3bc items=0 ppid=2636 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:25:10.845000 audit[2767]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2767 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:10.845000 audit[2767]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe410447f0 a2=0 a3=7ffe410447dc items=0 ppid=2636 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:10.857000 audit[2767]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2767 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:10.857000 audit[2767]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe410447f0 a2=0 a3=7ffe410447dc items=0 ppid=2636 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.857000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:10.860000 audit[2773]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.860000 audit[2773]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff72e6e120 a2=0 a3=7fff72e6e10c items=0 ppid=2636 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:25:10.868000 audit[2775]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.868000 audit[2775]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc10b44340 a2=0 a3=7ffc10b4432c items=0 ppid=2636 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:25:10.876000 audit[2778]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2778 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.876000 audit[2778]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffce344a430 a2=0 a3=7ffce344a41c items=0 ppid=2636 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:25:10.879000 audit[2779]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.879000 audit[2779]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaf5235d0 a2=0 a3=7ffcaf5235bc items=0 ppid=2636 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:25:10.884000 audit[2781]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.884000 audit[2781]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc40193e80 a2=0 a3=7ffc40193e6c items=0 ppid=2636 pid=2781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:25:10.887000 audit[2782]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.887000 audit[2782]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7fcfa790 a2=0 a3=7ffc7fcfa77c items=0 ppid=2636 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:25:10.891000 audit[2784]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.891000 audit[2784]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4fde1ea0 a2=0 a3=7fff4fde1e8c items=0 ppid=2636 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.891000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:25:10.898000 audit[2787]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2787 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.898000 audit[2787]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe3b21f040 a2=0 a3=7ffe3b21f02c items=0 ppid=2636 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:25:10.901000 audit[2788]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.901000 audit[2788]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffab9d580 a2=0 a3=7ffffab9d56c items=0 ppid=2636 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:25:10.905000 audit[2790]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.905000 audit[2790]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9150abc0 a2=0 a3=7ffe9150abac items=0 ppid=2636 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:25:10.907000 audit[2791]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2791 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.907000 audit[2791]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff674c8f70 a2=0 a3=7fff674c8f5c items=0 ppid=2636 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:25:10.914000 audit[2793]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.914000 audit[2793]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdacb76db0 a2=0 a3=7ffdacb76d9c 
items=0 ppid=2636 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:25:10.920000 audit[2796]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2796 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.920000 audit[2796]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3cf2fe00 a2=0 a3=7ffe3cf2fdec items=0 ppid=2636 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:25:10.926000 audit[2799]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2799 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.926000 audit[2799]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd357697d0 a2=0 a3=7ffd357697bc items=0 ppid=2636 pid=2799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:25:10.929000 audit[2800]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2800 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.929000 audit[2800]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc7e1dc220 a2=0 a3=7ffc7e1dc20c items=0 ppid=2636 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:25:10.934000 audit[2802]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2802 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.934000 audit[2802]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff4d31fb00 a2=0 a3=7fff4d31faec items=0 ppid=2636 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.934000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:25:10.941000 audit[2805]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2805 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.941000 audit[2805]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff12eeea20 a2=0 a3=7fff12eeea0c items=0 ppid=2636 pid=2805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:25:10.943000 audit[2806]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.943000 audit[2806]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4ca542b0 a2=0 a3=7ffe4ca5429c items=0 ppid=2636 pid=2806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:25:10.948000 audit[2808]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2808 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.948000 audit[2808]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdf3be6090 a2=0 a3=7ffdf3be607c items=0 ppid=2636 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:25:10.951000 audit[2809]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.951000 audit[2809]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0f6d2ab0 a2=0 a3=7fff0f6d2a9c items=0 ppid=2636 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:25:10.955000 audit[2811]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2811 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.955000 audit[2811]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffed6dc220 a2=0 a3=7fffed6dc20c items=0 ppid=2636 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:25:10.962000 audit[2814]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2814 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:25:10.962000 audit[2814]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd0a2175c0 a2=0 a3=7ffd0a2175ac items=0 ppid=2636 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:25:10.967000 audit[2816]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:25:10.967000 audit[2816]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffca32a8ba0 a2=0 a3=7ffca32a8b8c items=0 ppid=2636 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.967000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:10.968000 audit[2816]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:25:10.968000 audit[2816]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffca32a8ba0 a2=0 a3=7ffca32a8b8c items=0 ppid=2636 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:10.968000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:12.542132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522144093.mount: Deactivated successfully. 
Jun 25 16:25:13.431657 containerd[1398]: time="2024-06-25T16:25:13.431589365Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:13.433135 containerd[1398]: time="2024-06-25T16:25:13.433073265Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076060" Jun 25 16:25:13.434783 containerd[1398]: time="2024-06-25T16:25:13.434740041Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:13.437301 containerd[1398]: time="2024-06-25T16:25:13.437209769Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:13.439485 containerd[1398]: time="2024-06-25T16:25:13.439447923Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:13.440608 containerd[1398]: time="2024-06-25T16:25:13.440560138Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.785516888s" Jun 25 16:25:13.440701 containerd[1398]: time="2024-06-25T16:25:13.440614665Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:25:13.443365 containerd[1398]: time="2024-06-25T16:25:13.443325264Z" level=info msg="CreateContainer within sandbox \"08fdf81d71dec97a84724cfb65454c8e7c71eff2ff1b0000f714816a81ab34c8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:25:13.461677 containerd[1398]: time="2024-06-25T16:25:13.461622757Z" level=info msg="CreateContainer within sandbox \"08fdf81d71dec97a84724cfb65454c8e7c71eff2ff1b0000f714816a81ab34c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8232888b9e2f861350fbca308e84b12463862564bde462b4c2bdb7e96f403167\"" Jun 25 16:25:13.462755 containerd[1398]: time="2024-06-25T16:25:13.462708864Z" level=info msg="StartContainer for \"8232888b9e2f861350fbca308e84b12463862564bde462b4c2bdb7e96f403167\"" Jun 25 16:25:13.510597 systemd[1]: run-containerd-runc-k8s.io-8232888b9e2f861350fbca308e84b12463862564bde462b4c2bdb7e96f403167-runc.yAe9dH.mount: Deactivated successfully. 
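The containerd entries above are logfmt-style records (time=, level=, msg=). As a rough post-processing illustration (an assumed way of inspecting such a log, not containerd tooling), the sketch below extracts the quoted fields and estimates the effective pull rate for the tigera/operator image from the figures reported above: 22,076,060 bytes read in about 2.79 s is roughly 7.9 MB/s.

import re

# Hedged sketch: extract quoted key="value" pairs from a containerd log entry.
def parse_containerd_fields(entry: str) -> dict:
    return dict(re.findall(r'(\w+)="((?:[^"\\]|\\.)*)"', entry))

# Figures copied directly from the log entries above.
bytes_read = 22076060        # "bytes read=22076060"
pull_seconds = 2.785516888   # "... in 2.785516888s"
print(f"~{bytes_read / pull_seconds / 1e6:.1f} MB/s effective pull rate")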
Jun 25 16:25:13.552587 containerd[1398]: time="2024-06-25T16:25:13.552506103Z" level=info msg="StartContainer for \"8232888b9e2f861350fbca308e84b12463862564bde462b4c2bdb7e96f403167\" returns successfully" Jun 25 16:25:13.595967 kubelet[2481]: I0625 16:25:13.595927 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qcdgf" podStartSLOduration=4.595870365 podCreationTimestamp="2024-06-25 16:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:25:10.608687867 +0000 UTC m=+13.402301315" watchObservedRunningTime="2024-06-25 16:25:13.595870365 +0000 UTC m=+16.389483807" Jun 25 16:25:13.596878 kubelet[2481]: I0625 16:25:13.596851 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-4ndhq" podStartSLOduration=0.788935854 podCreationTimestamp="2024-06-25 16:25:10 +0000 UTC" firstStartedPulling="2024-06-25 16:25:10.633126323 +0000 UTC m=+13.426739743" lastFinishedPulling="2024-06-25 16:25:13.440997094 +0000 UTC m=+16.234610527" observedRunningTime="2024-06-25 16:25:13.595170657 +0000 UTC m=+16.388784100" watchObservedRunningTime="2024-06-25 16:25:13.596806638 +0000 UTC m=+16.390420080" Jun 25 16:25:16.652000 audit[2865]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.659329 kernel: kauditd_printk_skb: 143 callbacks suppressed Jun 25 16:25:16.659459 kernel: audit: type=1325 audit(1719332716.652:256): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.652000 audit[2865]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc3b555760 a2=0 a3=7ffc3b55574c items=0 ppid=2636 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:16.708378 kernel: audit: type=1300 audit(1719332716.652:256): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc3b555760 a2=0 a3=7ffc3b55574c items=0 ppid=2636 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:16.708591 kernel: audit: type=1327 audit(1719332716.652:256): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:16.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:16.654000 audit[2865]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.740115 kernel: audit: type=1325 audit(1719332716.654:257): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.740296 kernel: audit: type=1300 audit(1719332716.654:257): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3b555760 a2=0 a3=0 items=0 ppid=2636 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:16.654000 audit[2865]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3b555760 a2=0 a3=0 items=0 ppid=2636 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:16.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:16.787896 kernel: audit: type=1327 audit(1719332716.654:257): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:16.719000 audit[2867]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.719000 audit[2867]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc9aa08590 a2=0 a3=7ffc9aa0857c items=0 ppid=2636 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:16.844230 kernel: audit: type=1325 audit(1719332716.719:258): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.844437 kernel: audit: type=1300 audit(1719332716.719:258): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc9aa08590 a2=0 a3=7ffc9aa0857c items=0 ppid=2636 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:16.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:16.862289 kernel: audit: type=1327 audit(1719332716.719:258): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:16.865985 kubelet[2481]: I0625 16:25:16.865934 2481 topology_manager.go:215] "Topology Admit Handler" podUID="18ca7d62-ae72-4dd0-af51-c9b8606173bf" podNamespace="calico-system" podName="calico-typha-5d5cf8c7fb-kq52b" Jun 25 16:25:16.801000 audit[2867]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.897286 kernel: audit: type=1325 audit(1719332716.801:259): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:16.801000 audit[2867]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc9aa08590 a2=0 a3=0 items=0 ppid=2636 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:16.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:16.961465 kubelet[2481]: I0625 16:25:16.961375 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/18ca7d62-ae72-4dd0-af51-c9b8606173bf-tigera-ca-bundle\") pod \"calico-typha-5d5cf8c7fb-kq52b\" (UID: \"18ca7d62-ae72-4dd0-af51-c9b8606173bf\") " pod="calico-system/calico-typha-5d5cf8c7fb-kq52b" Jun 25 16:25:16.961673 kubelet[2481]: I0625 16:25:16.961497 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/18ca7d62-ae72-4dd0-af51-c9b8606173bf-typha-certs\") pod \"calico-typha-5d5cf8c7fb-kq52b\" (UID: \"18ca7d62-ae72-4dd0-af51-c9b8606173bf\") " pod="calico-system/calico-typha-5d5cf8c7fb-kq52b" Jun 25 16:25:16.961673 kubelet[2481]: I0625 16:25:16.961581 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzln2\" (UniqueName: \"kubernetes.io/projected/18ca7d62-ae72-4dd0-af51-c9b8606173bf-kube-api-access-xzln2\") pod \"calico-typha-5d5cf8c7fb-kq52b\" (UID: \"18ca7d62-ae72-4dd0-af51-c9b8606173bf\") " pod="calico-system/calico-typha-5d5cf8c7fb-kq52b" Jun 25 16:25:16.968530 kubelet[2481]: I0625 16:25:16.968472 2481 topology_manager.go:215] "Topology Admit Handler" podUID="8a77298d-5084-442e-82d2-9318b0748548" podNamespace="calico-system" podName="calico-node-s9m7z" Jun 25 16:25:17.061994 kubelet[2481]: I0625 16:25:17.061953 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-lib-modules\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.062391 kubelet[2481]: I0625 16:25:17.062363 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-cni-bin-dir\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.062645 kubelet[2481]: I0625 16:25:17.062622 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-policysync\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.062860 kubelet[2481]: I0625 16:25:17.062839 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-var-lib-calico\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.063073 kubelet[2481]: I0625 16:25:17.063044 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a77298d-5084-442e-82d2-9318b0748548-tigera-ca-bundle\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.063243 kubelet[2481]: I0625 16:25:17.063225 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-flexvol-driver-host\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " 
pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.063454 kubelet[2481]: I0625 16:25:17.063424 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-xtables-lock\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.063677 kubelet[2481]: I0625 16:25:17.063656 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8a77298d-5084-442e-82d2-9318b0748548-node-certs\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.063909 kubelet[2481]: I0625 16:25:17.063888 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-var-run-calico\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.064107 kubelet[2481]: I0625 16:25:17.064069 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-cni-net-dir\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.064313 kubelet[2481]: I0625 16:25:17.064292 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8a77298d-5084-442e-82d2-9318b0748548-cni-log-dir\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.064485 kubelet[2481]: I0625 16:25:17.064455 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx8jj\" (UniqueName: \"kubernetes.io/projected/8a77298d-5084-442e-82d2-9318b0748548-kube-api-access-bx8jj\") pod \"calico-node-s9m7z\" (UID: \"8a77298d-5084-442e-82d2-9318b0748548\") " pod="calico-system/calico-node-s9m7z" Jun 25 16:25:17.089519 kubelet[2481]: I0625 16:25:17.089476 2481 topology_manager.go:215] "Topology Admit Handler" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" podNamespace="calico-system" podName="csi-node-driver-p75ks" Jun 25 16:25:17.095523 kubelet[2481]: E0625 16:25:17.095486 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:17.165572 kubelet[2481]: I0625 16:25:17.165440 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0f943ce0-f09c-40ca-9640-4ebcb02d1c9f-varrun\") pod \"csi-node-driver-p75ks\" (UID: \"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f\") " pod="calico-system/csi-node-driver-p75ks" Jun 25 16:25:17.165919 kubelet[2481]: I0625 16:25:17.165896 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/0f943ce0-f09c-40ca-9640-4ebcb02d1c9f-socket-dir\") pod \"csi-node-driver-p75ks\" (UID: \"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f\") " pod="calico-system/csi-node-driver-p75ks" Jun 25 16:25:17.166140 kubelet[2481]: I0625 16:25:17.166124 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f943ce0-f09c-40ca-9640-4ebcb02d1c9f-kubelet-dir\") pod \"csi-node-driver-p75ks\" (UID: \"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f\") " pod="calico-system/csi-node-driver-p75ks" Jun 25 16:25:17.166318 kubelet[2481]: I0625 16:25:17.166301 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0f943ce0-f09c-40ca-9640-4ebcb02d1c9f-registration-dir\") pod \"csi-node-driver-p75ks\" (UID: \"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f\") " pod="calico-system/csi-node-driver-p75ks" Jun 25 16:25:17.166566 kubelet[2481]: I0625 16:25:17.166546 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pw74\" (UniqueName: \"kubernetes.io/projected/0f943ce0-f09c-40ca-9640-4ebcb02d1c9f-kube-api-access-8pw74\") pod \"csi-node-driver-p75ks\" (UID: \"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f\") " pod="calico-system/csi-node-driver-p75ks" Jun 25 16:25:17.184293 containerd[1398]: time="2024-06-25T16:25:17.182147611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5cf8c7fb-kq52b,Uid:18ca7d62-ae72-4dd0-af51-c9b8606173bf,Namespace:calico-system,Attempt:0,}" Jun 25 16:25:17.189082 kubelet[2481]: E0625 16:25:17.189056 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.189315 kubelet[2481]: W0625 16:25:17.189290 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.190010 kubelet[2481]: E0625 16:25:17.189966 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.233783 kubelet[2481]: E0625 16:25:17.233733 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.234214 kubelet[2481]: W0625 16:25:17.234182 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.235225 kubelet[2481]: E0625 16:25:17.235182 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.262237 containerd[1398]: time="2024-06-25T16:25:17.260479714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:17.262237 containerd[1398]: time="2024-06-25T16:25:17.260594849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:17.262237 containerd[1398]: time="2024-06-25T16:25:17.260651836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:17.262237 containerd[1398]: time="2024-06-25T16:25:17.260675885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:17.271301 kubelet[2481]: E0625 16:25:17.271250 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.271301 kubelet[2481]: W0625 16:25:17.271297 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.271618 kubelet[2481]: E0625 16:25:17.271335 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.271849 kubelet[2481]: E0625 16:25:17.271825 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.271849 kubelet[2481]: W0625 16:25:17.271848 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.271992 kubelet[2481]: E0625 16:25:17.271881 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.272677 kubelet[2481]: E0625 16:25:17.272654 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.272677 kubelet[2481]: W0625 16:25:17.272676 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.272878 kubelet[2481]: E0625 16:25:17.272703 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.274231 kubelet[2481]: E0625 16:25:17.274209 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.274231 kubelet[2481]: W0625 16:25:17.274230 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.274742 kubelet[2481]: E0625 16:25:17.274386 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:17.275022 kubelet[2481]: E0625 16:25:17.274994 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.275022 kubelet[2481]: W0625 16:25:17.275022 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.275196 kubelet[2481]: E0625 16:25:17.275174 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.276602 kubelet[2481]: E0625 16:25:17.276581 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.276602 kubelet[2481]: W0625 16:25:17.276602 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.276993 kubelet[2481]: E0625 16:25:17.276734 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.276993 kubelet[2481]: E0625 16:25:17.276951 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.276993 kubelet[2481]: W0625 16:25:17.276963 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.277380 kubelet[2481]: E0625 16:25:17.277084 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.277380 kubelet[2481]: E0625 16:25:17.277298 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.277380 kubelet[2481]: W0625 16:25:17.277320 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.277674 kubelet[2481]: E0625 16:25:17.277448 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.277825 kubelet[2481]: E0625 16:25:17.277705 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.277825 kubelet[2481]: W0625 16:25:17.277717 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.278020 kubelet[2481]: E0625 16:25:17.277835 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:17.278168 kubelet[2481]: E0625 16:25:17.278023 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.278168 kubelet[2481]: W0625 16:25:17.278035 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.278168 kubelet[2481]: E0625 16:25:17.278151 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.278554 kubelet[2481]: E0625 16:25:17.278364 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.278554 kubelet[2481]: W0625 16:25:17.278375 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.278554 kubelet[2481]: E0625 16:25:17.278500 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.287555 kubelet[2481]: E0625 16:25:17.287530 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.287555 kubelet[2481]: W0625 16:25:17.287555 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.288098 kubelet[2481]: E0625 16:25:17.287693 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:17.288677 containerd[1398]: time="2024-06-25T16:25:17.288622345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9m7z,Uid:8a77298d-5084-442e-82d2-9318b0748548,Namespace:calico-system,Attempt:0,}" Jun 25 16:25:17.290910 kubelet[2481]: E0625 16:25:17.290886 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.290910 kubelet[2481]: W0625 16:25:17.290911 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.291432 kubelet[2481]: E0625 16:25:17.291346 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.291432 kubelet[2481]: W0625 16:25:17.291370 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.292111 kubelet[2481]: E0625 16:25:17.292025 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.292111 kubelet[2481]: W0625 16:25:17.292044 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.294307 kubelet[2481]: E0625 16:25:17.294234 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.295615 kubelet[2481]: W0625 16:25:17.294438 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.295615 kubelet[2481]: E0625 16:25:17.295082 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.295615 kubelet[2481]: E0625 16:25:17.295111 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.295615 kubelet[2481]: E0625 16:25:17.295146 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.295615 kubelet[2481]: E0625 16:25:17.295191 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:17.295615 kubelet[2481]: E0625 16:25:17.295494 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.295615 kubelet[2481]: W0625 16:25:17.295529 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.295615 kubelet[2481]: E0625 16:25:17.295556 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.296748 kubelet[2481]: E0625 16:25:17.296701 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.296915 kubelet[2481]: W0625 16:25:17.296718 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.297093 kubelet[2481]: E0625 16:25:17.297043 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.297707 kubelet[2481]: E0625 16:25:17.297625 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.297707 kubelet[2481]: W0625 16:25:17.297645 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.297707 kubelet[2481]: E0625 16:25:17.297668 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.299075 kubelet[2481]: E0625 16:25:17.298456 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.299075 kubelet[2481]: W0625 16:25:17.298472 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.299075 kubelet[2481]: E0625 16:25:17.298493 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.299075 kubelet[2481]: E0625 16:25:17.298783 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.299075 kubelet[2481]: W0625 16:25:17.298796 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.299075 kubelet[2481]: E0625 16:25:17.298819 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:17.300104 kubelet[2481]: E0625 16:25:17.299816 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.300104 kubelet[2481]: W0625 16:25:17.299834 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.300104 kubelet[2481]: E0625 16:25:17.299857 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.300956 kubelet[2481]: E0625 16:25:17.300894 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.300956 kubelet[2481]: W0625 16:25:17.300911 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.301656 kubelet[2481]: E0625 16:25:17.301637 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.301808 kubelet[2481]: W0625 16:25:17.301787 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.302308 kubelet[2481]: E0625 16:25:17.302287 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.302451 kubelet[2481]: W0625 16:25:17.302435 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.302615 kubelet[2481]: E0625 16:25:17.302599 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.302803 kubelet[2481]: E0625 16:25:17.302785 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.303394 kubelet[2481]: E0625 16:25:17.303355 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:17.319882 kubelet[2481]: E0625 16:25:17.319848 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:17.320078 kubelet[2481]: W0625 16:25:17.320061 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:17.320220 kubelet[2481]: E0625 16:25:17.320208 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:17.383459 containerd[1398]: time="2024-06-25T16:25:17.383346672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:17.383766 containerd[1398]: time="2024-06-25T16:25:17.383704175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:17.383922 containerd[1398]: time="2024-06-25T16:25:17.383789483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:17.383922 containerd[1398]: time="2024-06-25T16:25:17.383832884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:17.479291 containerd[1398]: time="2024-06-25T16:25:17.475154533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5cf8c7fb-kq52b,Uid:18ca7d62-ae72-4dd0-af51-c9b8606173bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"42fff1b361cee7e63d2ea8d98aa38ed9cd405cb6b6e7b1c50e4fdf57e667bdd7\"" Jun 25 16:25:17.485019 containerd[1398]: time="2024-06-25T16:25:17.484973682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:25:17.520831 containerd[1398]: time="2024-06-25T16:25:17.520771281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s9m7z,Uid:8a77298d-5084-442e-82d2-9318b0748548,Namespace:calico-system,Attempt:0,} returns sandbox id \"c68bdb204b11db746f343530ac1d393eb275917f2916fddb4efef4650d9d2457\"" Jun 25 16:25:17.979000 audit[2984]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:17.979000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc9bf04bf0 a2=0 a3=7ffc9bf04bdc items=0 ppid=2636 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:17.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:17.981000 audit[2984]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:17.981000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc9bf04bf0 a2=0 a3=0 items=0 ppid=2636 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:17.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:18.467727 kubelet[2481]: E0625 16:25:18.466947 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:19.941904 containerd[1398]: time="2024-06-25T16:25:19.941814763Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:19.943618 containerd[1398]: time="2024-06-25T16:25:19.943538719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:25:19.945749 containerd[1398]: time="2024-06-25T16:25:19.945697201Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:19.948421 containerd[1398]: time="2024-06-25T16:25:19.948375071Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:19.951934 containerd[1398]: time="2024-06-25T16:25:19.951863494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:19.952799 containerd[1398]: time="2024-06-25T16:25:19.952729169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.467347037s" Jun 25 16:25:19.953037 containerd[1398]: time="2024-06-25T16:25:19.952829349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:25:19.954673 containerd[1398]: time="2024-06-25T16:25:19.954634144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:25:19.997681 containerd[1398]: time="2024-06-25T16:25:19.997597018Z" level=info msg="CreateContainer within sandbox \"42fff1b361cee7e63d2ea8d98aa38ed9cd405cb6b6e7b1c50e4fdf57e667bdd7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:25:20.036924 containerd[1398]: time="2024-06-25T16:25:20.036816760Z" level=info msg="CreateContainer within sandbox \"42fff1b361cee7e63d2ea8d98aa38ed9cd405cb6b6e7b1c50e4fdf57e667bdd7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"361d572e5954aa7493779036b65e4fde506b877fb5588bd8bc8dd33fd5392a0b\"" Jun 25 16:25:20.041331 containerd[1398]: time="2024-06-25T16:25:20.041205158Z" level=info msg="StartContainer for \"361d572e5954aa7493779036b65e4fde506b877fb5588bd8bc8dd33fd5392a0b\"" Jun 25 16:25:20.217755 containerd[1398]: time="2024-06-25T16:25:20.217012954Z" level=info msg="StartContainer for \"361d572e5954aa7493779036b65e4fde506b877fb5588bd8bc8dd33fd5392a0b\" returns successfully" Jun 25 16:25:20.469157 kubelet[2481]: E0625 16:25:20.468178 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:20.680493 kubelet[2481]: E0625 16:25:20.679069 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.680493 kubelet[2481]: W0625 16:25:20.679133 2481 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.680493 kubelet[2481]: E0625 16:25:20.679174 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.680493 kubelet[2481]: E0625 16:25:20.679611 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.680493 kubelet[2481]: W0625 16:25:20.679629 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.680493 kubelet[2481]: E0625 16:25:20.679655 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.680493 kubelet[2481]: E0625 16:25:20.679972 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.680493 kubelet[2481]: W0625 16:25:20.679987 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.680493 kubelet[2481]: E0625 16:25:20.680010 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.680493 kubelet[2481]: E0625 16:25:20.680320 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.681201 kubelet[2481]: W0625 16:25:20.680335 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.681201 kubelet[2481]: E0625 16:25:20.680357 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.682212 kubelet[2481]: E0625 16:25:20.681612 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.682212 kubelet[2481]: W0625 16:25:20.681632 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.682212 kubelet[2481]: E0625 16:25:20.681654 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:20.682212 kubelet[2481]: E0625 16:25:20.681955 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.682212 kubelet[2481]: W0625 16:25:20.681969 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.682212 kubelet[2481]: E0625 16:25:20.681989 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.683009 kubelet[2481]: E0625 16:25:20.682724 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.683009 kubelet[2481]: W0625 16:25:20.682740 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.683009 kubelet[2481]: E0625 16:25:20.682761 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.683505 kubelet[2481]: E0625 16:25:20.683325 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.683505 kubelet[2481]: W0625 16:25:20.683342 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.683505 kubelet[2481]: E0625 16:25:20.683369 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.683947 kubelet[2481]: E0625 16:25:20.683921 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.684173 kubelet[2481]: W0625 16:25:20.684040 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.684173 kubelet[2481]: E0625 16:25:20.684067 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.684591 kubelet[2481]: E0625 16:25:20.684577 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.684693 kubelet[2481]: W0625 16:25:20.684679 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.684798 kubelet[2481]: E0625 16:25:20.684786 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:20.685166 kubelet[2481]: E0625 16:25:20.685152 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.685296 kubelet[2481]: W0625 16:25:20.685280 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.685393 kubelet[2481]: E0625 16:25:20.685381 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.685777 kubelet[2481]: E0625 16:25:20.685762 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.685892 kubelet[2481]: W0625 16:25:20.685876 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.686018 kubelet[2481]: E0625 16:25:20.686005 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.686538 kubelet[2481]: E0625 16:25:20.686514 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.686657 kubelet[2481]: W0625 16:25:20.686642 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.686794 kubelet[2481]: E0625 16:25:20.686776 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.687181 kubelet[2481]: E0625 16:25:20.687165 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.687360 kubelet[2481]: W0625 16:25:20.687343 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.687482 kubelet[2481]: E0625 16:25:20.687469 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.687862 kubelet[2481]: E0625 16:25:20.687848 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.688360 kubelet[2481]: W0625 16:25:20.688335 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.688495 kubelet[2481]: E0625 16:25:20.688481 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:20.714734 kubelet[2481]: E0625 16:25:20.714692 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.715065 kubelet[2481]: W0625 16:25:20.715031 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.715332 kubelet[2481]: E0625 16:25:20.715306 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.716186 kubelet[2481]: E0625 16:25:20.716163 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.716377 kubelet[2481]: W0625 16:25:20.716353 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.716507 kubelet[2481]: E0625 16:25:20.716494 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.717068 kubelet[2481]: E0625 16:25:20.717049 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.717200 kubelet[2481]: W0625 16:25:20.717182 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.717417 kubelet[2481]: E0625 16:25:20.717380 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.717887 kubelet[2481]: E0625 16:25:20.717870 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.717986 kubelet[2481]: W0625 16:25:20.717919 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.718077 kubelet[2481]: E0625 16:25:20.718009 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.718463 kubelet[2481]: E0625 16:25:20.718443 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.718463 kubelet[2481]: W0625 16:25:20.718462 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.718649 kubelet[2481]: E0625 16:25:20.718631 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:20.718797 kubelet[2481]: E0625 16:25:20.718775 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.718797 kubelet[2481]: W0625 16:25:20.718794 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.718972 kubelet[2481]: E0625 16:25:20.718953 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.722827 kubelet[2481]: E0625 16:25:20.719366 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.723049 kubelet[2481]: W0625 16:25:20.723022 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.723319 kubelet[2481]: E0625 16:25:20.723301 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.723740 kubelet[2481]: E0625 16:25:20.723723 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.723865 kubelet[2481]: W0625 16:25:20.723847 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.724113 kubelet[2481]: E0625 16:25:20.724097 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.724457 kubelet[2481]: E0625 16:25:20.724442 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.724586 kubelet[2481]: W0625 16:25:20.724572 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.724798 kubelet[2481]: E0625 16:25:20.724785 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.725143 kubelet[2481]: E0625 16:25:20.725127 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.725287 kubelet[2481]: W0625 16:25:20.725246 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.725528 kubelet[2481]: E0625 16:25:20.725512 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:20.725895 kubelet[2481]: E0625 16:25:20.725879 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.726043 kubelet[2481]: W0625 16:25:20.726025 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.726282 kubelet[2481]: E0625 16:25:20.726245 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.726667 kubelet[2481]: E0625 16:25:20.726652 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.726801 kubelet[2481]: W0625 16:25:20.726786 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.727085 kubelet[2481]: E0625 16:25:20.727068 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.727544 kubelet[2481]: E0625 16:25:20.727527 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.727693 kubelet[2481]: W0625 16:25:20.727674 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.727906 kubelet[2481]: E0625 16:25:20.727893 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.728576 kubelet[2481]: E0625 16:25:20.728559 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.728699 kubelet[2481]: W0625 16:25:20.728684 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.728953 kubelet[2481]: E0625 16:25:20.728925 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.729796 kubelet[2481]: E0625 16:25:20.729780 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.729939 kubelet[2481]: W0625 16:25:20.729914 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.730142 kubelet[2481]: E0625 16:25:20.730130 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:25:20.730784 kubelet[2481]: E0625 16:25:20.730768 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.730900 kubelet[2481]: W0625 16:25:20.730886 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.731109 kubelet[2481]: E0625 16:25:20.731097 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.731467 kubelet[2481]: E0625 16:25:20.731452 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.731599 kubelet[2481]: W0625 16:25:20.731584 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.731804 kubelet[2481]: E0625 16:25:20.731791 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.732116 kubelet[2481]: E0625 16:25:20.732102 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:25:20.732227 kubelet[2481]: W0625 16:25:20.732213 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:25:20.732380 kubelet[2481]: E0625 16:25:20.732354 2481 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:25:20.967986 systemd[1]: run-containerd-runc-k8s.io-361d572e5954aa7493779036b65e4fde506b877fb5588bd8bc8dd33fd5392a0b-runc.3NnTLY.mount: Deactivated successfully. 
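The wall of kubelet records above is the FlexVolume plugin prober retrying the nodeagent~uds driver: the expected executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not installed, so every init call produces empty output and driver-call.go's JSON decode fails with "unexpected end of JSON input". Purely as an illustrative sketch of the driver-call contract (this is not the real uds driver), a FlexVolume executable is expected to answer init with a one-line JSON status object on stdout:

    // flexvol_init_sketch.go - illustrative only; shows the JSON the kubelet's
    // FlexVolume driver call expects on stdout, which the missing uds binary
    // above never produces (hence "unexpected end of JSON input").
    package main

    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // "attach": false tells the kubelet this driver has no attach/detach phase.
            json.NewEncoder(os.Stdout).Encode(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            return
        }
        // Unknown sub-commands must still answer with valid JSON.
        json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
        os.Exit(1)
    }

Installing a working driver binary at that path, or removing the stale nodeagent~uds directory, is the usual way to quiet this probe loop.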
Jun 25 16:25:21.168475 containerd[1398]: time="2024-06-25T16:25:21.168390753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:21.170807 containerd[1398]: time="2024-06-25T16:25:21.170694394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:25:21.173114 containerd[1398]: time="2024-06-25T16:25:21.173043981Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:21.179979 containerd[1398]: time="2024-06-25T16:25:21.178569369Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:21.182586 containerd[1398]: time="2024-06-25T16:25:21.182036672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:21.183742 containerd[1398]: time="2024-06-25T16:25:21.183676295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.228832238s" Jun 25 16:25:21.183919 containerd[1398]: time="2024-06-25T16:25:21.183753247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:25:21.195199 containerd[1398]: time="2024-06-25T16:25:21.189095720Z" level=info msg="CreateContainer within sandbox \"c68bdb204b11db746f343530ac1d393eb275917f2916fddb4efef4650d9d2457\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:25:21.225321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792073773.mount: Deactivated successfully. 
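The containerd records above show the pod2daemon-flexvol:v3.28.0 image pull completing and a flexvol-driver container being requested in sandbox c68bdb204b11..., the same sandbox the later install-cni and calico-node containers use; this is the kubelet driving containerd through the CRI. For reference only, a standalone pull of the same image against the default containerd socket and the k8s.io namespace seen in these lines would look roughly like this with the containerd Go client (a sketch, not what the kubelet itself runs):

    // pull_sketch.go - standalone illustration of the image pull recorded above,
    // using the containerd Go client, the default socket path, and the k8s.io
    // namespace that appears in these log lines.
    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect to containerd: %v", err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatalf("pull: %v", err)
        }
        log.Printf("pulled %s digest=%s", img.Name(), img.Target().Digest)
    }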
Jun 25 16:25:21.234498 containerd[1398]: time="2024-06-25T16:25:21.234401112Z" level=info msg="CreateContainer within sandbox \"c68bdb204b11db746f343530ac1d393eb275917f2916fddb4efef4650d9d2457\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eb42bcd466b17a1cbfc993ece3baf6990257df5811333bde3f92b06f59ae0529\"" Jun 25 16:25:21.236411 containerd[1398]: time="2024-06-25T16:25:21.236349070Z" level=info msg="StartContainer for \"eb42bcd466b17a1cbfc993ece3baf6990257df5811333bde3f92b06f59ae0529\"" Jun 25 16:25:21.370676 containerd[1398]: time="2024-06-25T16:25:21.370598776Z" level=info msg="StartContainer for \"eb42bcd466b17a1cbfc993ece3baf6990257df5811333bde3f92b06f59ae0529\" returns successfully" Jun 25 16:25:21.616983 kubelet[2481]: I0625 16:25:21.616771 2481 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:25:21.646979 kubelet[2481]: I0625 16:25:21.646922 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5d5cf8c7fb-kq52b" podStartSLOduration=3.176077013 podCreationTimestamp="2024-06-25 16:25:16 +0000 UTC" firstStartedPulling="2024-06-25 16:25:17.482804603 +0000 UTC m=+20.276418025" lastFinishedPulling="2024-06-25 16:25:19.953577375 +0000 UTC m=+22.747190808" observedRunningTime="2024-06-25 16:25:20.630510928 +0000 UTC m=+23.424124370" watchObservedRunningTime="2024-06-25 16:25:21.646849796 +0000 UTC m=+24.440463559" Jun 25 16:25:21.967604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb42bcd466b17a1cbfc993ece3baf6990257df5811333bde3f92b06f59ae0529-rootfs.mount: Deactivated successfully. Jun 25 16:25:22.078286 containerd[1398]: time="2024-06-25T16:25:22.078151981Z" level=info msg="shim disconnected" id=eb42bcd466b17a1cbfc993ece3baf6990257df5811333bde3f92b06f59ae0529 namespace=k8s.io Jun 25 16:25:22.078666 containerd[1398]: time="2024-06-25T16:25:22.078630436Z" level=warning msg="cleaning up after shim disconnected" id=eb42bcd466b17a1cbfc993ece3baf6990257df5811333bde3f92b06f59ae0529 namespace=k8s.io Jun 25 16:25:22.078802 containerd[1398]: time="2024-06-25T16:25:22.078778337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:25:22.466698 kubelet[2481]: E0625 16:25:22.466626 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:22.622293 containerd[1398]: time="2024-06-25T16:25:22.622150271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:25:23.879318 kubelet[2481]: I0625 16:25:23.879018 2481 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:25:23.979720 kernel: kauditd_printk_skb: 8 callbacks suppressed Jun 25 16:25:23.980024 kernel: audit: type=1325 audit(1719332723.956:262): table=filter:95 family=2 entries=15 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:23.956000 audit[3137]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:23.956000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe3967f4c0 a2=0 a3=7ffe3967f4ac items=0 ppid=2636 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:24.019337 kernel: audit: type=1300 audit(1719332723.956:262): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe3967f4c0 a2=0 a3=7ffe3967f4ac items=0 ppid=2636 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:23.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:24.038356 kernel: audit: type=1327 audit(1719332723.956:262): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:23.983000 audit[3137]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:24.082837 kernel: audit: type=1325 audit(1719332723.983:263): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:24.083079 kernel: audit: type=1300 audit(1719332723.983:263): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe3967f4c0 a2=0 a3=7ffe3967f4ac items=0 ppid=2636 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:23.983000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe3967f4c0 a2=0 a3=7ffe3967f4ac items=0 ppid=2636 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:23.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:24.104067 kernel: audit: type=1327 audit(1719332723.983:263): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:24.469783 kubelet[2481]: E0625 16:25:24.469739 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:26.466329 kubelet[2481]: E0625 16:25:26.466272 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:27.303735 containerd[1398]: time="2024-06-25T16:25:27.303656848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:27.305468 containerd[1398]: time="2024-06-25T16:25:27.305390750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:25:27.307739 containerd[1398]: 
time="2024-06-25T16:25:27.307679721Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:27.310028 containerd[1398]: time="2024-06-25T16:25:27.309984899Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:27.313000 containerd[1398]: time="2024-06-25T16:25:27.312954281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:27.317171 containerd[1398]: time="2024-06-25T16:25:27.314958147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.692730679s" Jun 25 16:25:27.317171 containerd[1398]: time="2024-06-25T16:25:27.315017237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:25:27.324617 containerd[1398]: time="2024-06-25T16:25:27.324564787Z" level=info msg="CreateContainer within sandbox \"c68bdb204b11db746f343530ac1d393eb275917f2916fddb4efef4650d9d2457\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:25:27.355638 containerd[1398]: time="2024-06-25T16:25:27.355561608Z" level=info msg="CreateContainer within sandbox \"c68bdb204b11db746f343530ac1d393eb275917f2916fddb4efef4650d9d2457\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"104fad63af249b4b56a7bc0651a7da15cefa747f7becb2f8948e7422b5288844\"" Jun 25 16:25:27.358936 containerd[1398]: time="2024-06-25T16:25:27.356871332Z" level=info msg="StartContainer for \"104fad63af249b4b56a7bc0651a7da15cefa747f7becb2f8948e7422b5288844\"" Jun 25 16:25:27.419536 systemd[1]: run-containerd-runc-k8s.io-104fad63af249b4b56a7bc0651a7da15cefa747f7becb2f8948e7422b5288844-runc.xYtZwA.mount: Deactivated successfully. Jun 25 16:25:27.486178 containerd[1398]: time="2024-06-25T16:25:27.486095972Z" level=info msg="StartContainer for \"104fad63af249b4b56a7bc0651a7da15cefa747f7becb2f8948e7422b5288844\" returns successfully" Jun 25 16:25:28.268390 containerd[1398]: time="2024-06-25T16:25:28.268315640Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:25:28.308790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-104fad63af249b4b56a7bc0651a7da15cefa747f7becb2f8948e7422b5288844-rootfs.mount: Deactivated successfully. 
Jun 25 16:25:28.319136 kubelet[2481]: I0625 16:25:28.317073 2481 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:25:28.352576 kubelet[2481]: I0625 16:25:28.352501 2481 topology_manager.go:215] "Topology Admit Handler" podUID="c210fa31-ccfa-431b-9325-067eea977ba2" podNamespace="kube-system" podName="coredns-5dd5756b68-b9hjd" Jun 25 16:25:28.364302 kubelet[2481]: I0625 16:25:28.363806 2481 topology_manager.go:215] "Topology Admit Handler" podUID="b182dc22-0381-43d8-b490-0a8a7b490243" podNamespace="calico-system" podName="calico-kube-controllers-68bf9c55bd-bl5gf" Jun 25 16:25:28.366898 kubelet[2481]: I0625 16:25:28.366854 2481 topology_manager.go:215] "Topology Admit Handler" podUID="50a27c96-8776-4abc-85f1-1753c70aac48" podNamespace="kube-system" podName="coredns-5dd5756b68-jmv4k" Jun 25 16:25:28.398314 kubelet[2481]: I0625 16:25:28.398218 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c210fa31-ccfa-431b-9325-067eea977ba2-config-volume\") pod \"coredns-5dd5756b68-b9hjd\" (UID: \"c210fa31-ccfa-431b-9325-067eea977ba2\") " pod="kube-system/coredns-5dd5756b68-b9hjd" Jun 25 16:25:28.399041 kubelet[2481]: I0625 16:25:28.399009 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpn4w\" (UniqueName: \"kubernetes.io/projected/c210fa31-ccfa-431b-9325-067eea977ba2-kube-api-access-tpn4w\") pod \"coredns-5dd5756b68-b9hjd\" (UID: \"c210fa31-ccfa-431b-9325-067eea977ba2\") " pod="kube-system/coredns-5dd5756b68-b9hjd" Jun 25 16:25:28.471304 containerd[1398]: time="2024-06-25T16:25:28.470706121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p75ks,Uid:0f943ce0-f09c-40ca-9640-4ebcb02d1c9f,Namespace:calico-system,Attempt:0,}" Jun 25 16:25:28.501152 kubelet[2481]: I0625 16:25:28.501077 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b182dc22-0381-43d8-b490-0a8a7b490243-tigera-ca-bundle\") pod \"calico-kube-controllers-68bf9c55bd-bl5gf\" (UID: \"b182dc22-0381-43d8-b490-0a8a7b490243\") " pod="calico-system/calico-kube-controllers-68bf9c55bd-bl5gf" Jun 25 16:25:28.505077 kubelet[2481]: I0625 16:25:28.501216 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh9p7\" (UniqueName: \"kubernetes.io/projected/b182dc22-0381-43d8-b490-0a8a7b490243-kube-api-access-sh9p7\") pod \"calico-kube-controllers-68bf9c55bd-bl5gf\" (UID: \"b182dc22-0381-43d8-b490-0a8a7b490243\") " pod="calico-system/calico-kube-controllers-68bf9c55bd-bl5gf" Jun 25 16:25:28.505077 kubelet[2481]: I0625 16:25:28.501350 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnh7x\" (UniqueName: \"kubernetes.io/projected/50a27c96-8776-4abc-85f1-1753c70aac48-kube-api-access-wnh7x\") pod \"coredns-5dd5756b68-jmv4k\" (UID: \"50a27c96-8776-4abc-85f1-1753c70aac48\") " pod="kube-system/coredns-5dd5756b68-jmv4k" Jun 25 16:25:28.505077 kubelet[2481]: I0625 16:25:28.501390 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50a27c96-8776-4abc-85f1-1753c70aac48-config-volume\") pod \"coredns-5dd5756b68-jmv4k\" (UID: \"50a27c96-8776-4abc-85f1-1753c70aac48\") " 
pod="kube-system/coredns-5dd5756b68-jmv4k" Jun 25 16:25:28.670371 containerd[1398]: time="2024-06-25T16:25:28.670219531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b9hjd,Uid:c210fa31-ccfa-431b-9325-067eea977ba2,Namespace:kube-system,Attempt:0,}" Jun 25 16:25:28.707760 containerd[1398]: time="2024-06-25T16:25:28.707681124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bf9c55bd-bl5gf,Uid:b182dc22-0381-43d8-b490-0a8a7b490243,Namespace:calico-system,Attempt:0,}" Jun 25 16:25:28.708355 containerd[1398]: time="2024-06-25T16:25:28.708305155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jmv4k,Uid:50a27c96-8776-4abc-85f1-1753c70aac48,Namespace:kube-system,Attempt:0,}" Jun 25 16:25:29.054457 containerd[1398]: time="2024-06-25T16:25:29.053846312Z" level=info msg="shim disconnected" id=104fad63af249b4b56a7bc0651a7da15cefa747f7becb2f8948e7422b5288844 namespace=k8s.io Jun 25 16:25:29.054457 containerd[1398]: time="2024-06-25T16:25:29.053934618Z" level=warning msg="cleaning up after shim disconnected" id=104fad63af249b4b56a7bc0651a7da15cefa747f7becb2f8948e7422b5288844 namespace=k8s.io Jun 25 16:25:29.054457 containerd[1398]: time="2024-06-25T16:25:29.053952317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:25:29.261493 containerd[1398]: time="2024-06-25T16:25:29.261389304Z" level=error msg="Failed to destroy network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.262390 containerd[1398]: time="2024-06-25T16:25:29.262324452Z" level=error msg="encountered an error cleaning up failed sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.262665 containerd[1398]: time="2024-06-25T16:25:29.262615914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p75ks,Uid:0f943ce0-f09c-40ca-9640-4ebcb02d1c9f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.264543 kubelet[2481]: E0625 16:25:29.264500 2481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.264727 kubelet[2481]: E0625 16:25:29.264637 2481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p75ks" Jun 25 16:25:29.264727 kubelet[2481]: E0625 16:25:29.264702 2481 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-p75ks" Jun 25 16:25:29.267869 kubelet[2481]: E0625 16:25:29.266800 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-p75ks_calico-system(0f943ce0-f09c-40ca-9640-4ebcb02d1c9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-p75ks_calico-system(0f943ce0-f09c-40ca-9640-4ebcb02d1c9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:29.314298 containerd[1398]: time="2024-06-25T16:25:29.313364683Z" level=error msg="Failed to destroy network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.321485 containerd[1398]: time="2024-06-25T16:25:29.317283300Z" level=error msg="encountered an error cleaning up failed sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.321485 containerd[1398]: time="2024-06-25T16:25:29.317422729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b9hjd,Uid:c210fa31-ccfa-431b-9325-067eea977ba2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.321677 kubelet[2481]: E0625 16:25:29.317771 2481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.321677 kubelet[2481]: E0625 16:25:29.317854 2481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-b9hjd" Jun 25 16:25:29.321677 kubelet[2481]: E0625 16:25:29.317906 2481 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-b9hjd" Jun 25 16:25:29.323136 kubelet[2481]: E0625 16:25:29.317996 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-b9hjd_kube-system(c210fa31-ccfa-431b-9325-067eea977ba2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-b9hjd_kube-system(c210fa31-ccfa-431b-9325-067eea977ba2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-b9hjd" podUID="c210fa31-ccfa-431b-9325-067eea977ba2" Jun 25 16:25:29.325521 containerd[1398]: time="2024-06-25T16:25:29.325460408Z" level=error msg="Failed to destroy network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.326295 containerd[1398]: time="2024-06-25T16:25:29.326208130Z" level=error msg="encountered an error cleaning up failed sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.326637 containerd[1398]: time="2024-06-25T16:25:29.326575934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bf9c55bd-bl5gf,Uid:b182dc22-0381-43d8-b490-0a8a7b490243,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.327372 kubelet[2481]: E0625 16:25:29.327292 2481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.329127 kubelet[2481]: E0625 16:25:29.327574 2481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bf9c55bd-bl5gf" Jun 25 16:25:29.329127 kubelet[2481]: E0625 16:25:29.327626 2481 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bf9c55bd-bl5gf" Jun 25 16:25:29.329127 kubelet[2481]: E0625 16:25:29.327716 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68bf9c55bd-bl5gf_calico-system(b182dc22-0381-43d8-b490-0a8a7b490243)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68bf9c55bd-bl5gf_calico-system(b182dc22-0381-43d8-b490-0a8a7b490243)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bf9c55bd-bl5gf" podUID="b182dc22-0381-43d8-b490-0a8a7b490243" Jun 25 16:25:29.339700 containerd[1398]: time="2024-06-25T16:25:29.339618205Z" level=error msg="Failed to destroy network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.340237 containerd[1398]: time="2024-06-25T16:25:29.340164835Z" level=error msg="encountered an error cleaning up failed sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.340377 containerd[1398]: time="2024-06-25T16:25:29.340271800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jmv4k,Uid:50a27c96-8776-4abc-85f1-1753c70aac48,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.340711 kubelet[2481]: E0625 16:25:29.340645 2481 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.340851 kubelet[2481]: E0625 
16:25:29.340718 2481 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jmv4k" Jun 25 16:25:29.340851 kubelet[2481]: E0625 16:25:29.340750 2481 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jmv4k" Jun 25 16:25:29.341043 kubelet[2481]: E0625 16:25:29.341002 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-jmv4k_kube-system(50a27c96-8776-4abc-85f1-1753c70aac48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-jmv4k_kube-system(50a27c96-8776-4abc-85f1-1753c70aac48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jmv4k" podUID="50a27c96-8776-4abc-85f1-1753c70aac48" Jun 25 16:25:29.518435 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b-shm.mount: Deactivated successfully. 
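All four RunPodSandbox failures above share the root cause spelled out in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and that container has not started yet (its image pull only begins just below). A tiny illustrative check for that precondition (a sketch of the stat the plugin reports failing, not Calico's code):

    // nodename_check.go - sketch of the precondition the Calico CNI plugin is
    // failing above: /var/lib/calico/nodename must exist (calico/node writes it
    // after starting) before any pod sandbox network setup or teardown succeeds.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const path = "/var/lib/calico/nodename"
        data, err := os.ReadFile(path)
        if err != nil {
            // Matches the situation in the log: calico/node is not running yet.
            fmt.Printf("%v - check that the calico/node container is running\n", err)
            os.Exit(1)
        }
        fmt.Printf("calico node name: %q - sandbox setup can proceed\n", strings.TrimSpace(string(data)))
    }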
Jun 25 16:25:29.659351 containerd[1398]: time="2024-06-25T16:25:29.659237380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:25:29.662818 kubelet[2481]: I0625 16:25:29.662154 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:29.663772 containerd[1398]: time="2024-06-25T16:25:29.663734116Z" level=info msg="StopPodSandbox for \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\"" Jun 25 16:25:29.666002 containerd[1398]: time="2024-06-25T16:25:29.665968572Z" level=info msg="Ensure that sandbox 501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5 in task-service has been cleanup successfully" Jun 25 16:25:29.668531 kubelet[2481]: I0625 16:25:29.668420 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:29.670148 containerd[1398]: time="2024-06-25T16:25:29.670105168Z" level=info msg="StopPodSandbox for \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\"" Jun 25 16:25:29.670868 containerd[1398]: time="2024-06-25T16:25:29.670837278Z" level=info msg="Ensure that sandbox 9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b in task-service has been cleanup successfully" Jun 25 16:25:29.672715 kubelet[2481]: I0625 16:25:29.672556 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:29.673442 containerd[1398]: time="2024-06-25T16:25:29.673398133Z" level=info msg="StopPodSandbox for \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\"" Jun 25 16:25:29.673927 containerd[1398]: time="2024-06-25T16:25:29.673881114Z" level=info msg="Ensure that sandbox 1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad in task-service has been cleanup successfully" Jun 25 16:25:29.676288 kubelet[2481]: I0625 16:25:29.676016 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:29.676891 containerd[1398]: time="2024-06-25T16:25:29.676842195Z" level=info msg="StopPodSandbox for \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\"" Jun 25 16:25:29.677399 containerd[1398]: time="2024-06-25T16:25:29.677370913Z" level=info msg="Ensure that sandbox d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6 in task-service has been cleanup successfully" Jun 25 16:25:29.775019 containerd[1398]: time="2024-06-25T16:25:29.774924517Z" level=error msg="StopPodSandbox for \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\" failed" error="failed to destroy network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.775934 kubelet[2481]: E0625 16:25:29.775603 2481 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:29.775934 kubelet[2481]: E0625 16:25:29.775744 2481 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5"} Jun 25 16:25:29.775934 kubelet[2481]: E0625 16:25:29.775824 2481 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50a27c96-8776-4abc-85f1-1753c70aac48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:29.775934 kubelet[2481]: E0625 16:25:29.775902 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50a27c96-8776-4abc-85f1-1753c70aac48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jmv4k" podUID="50a27c96-8776-4abc-85f1-1753c70aac48" Jun 25 16:25:29.800935 containerd[1398]: time="2024-06-25T16:25:29.800849324Z" level=error msg="StopPodSandbox for \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\" failed" error="failed to destroy network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.801899 kubelet[2481]: E0625 16:25:29.801581 2481 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:29.801899 kubelet[2481]: E0625 16:25:29.801659 2481 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6"} Jun 25 16:25:29.801899 kubelet[2481]: E0625 16:25:29.801736 2481 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b182dc22-0381-43d8-b490-0a8a7b490243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:29.801899 kubelet[2481]: E0625 16:25:29.801804 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"b182dc22-0381-43d8-b490-0a8a7b490243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bf9c55bd-bl5gf" podUID="b182dc22-0381-43d8-b490-0a8a7b490243" Jun 25 16:25:29.806439 containerd[1398]: time="2024-06-25T16:25:29.806367787Z" level=error msg="StopPodSandbox for \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\" failed" error="failed to destroy network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.807250 kubelet[2481]: E0625 16:25:29.806937 2481 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:29.807250 kubelet[2481]: E0625 16:25:29.807043 2481 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad"} Jun 25 16:25:29.807250 kubelet[2481]: E0625 16:25:29.807131 2481 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c210fa31-ccfa-431b-9325-067eea977ba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:29.807250 kubelet[2481]: E0625 16:25:29.807199 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c210fa31-ccfa-431b-9325-067eea977ba2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-b9hjd" podUID="c210fa31-ccfa-431b-9325-067eea977ba2" Jun 25 16:25:29.809016 containerd[1398]: time="2024-06-25T16:25:29.808924755Z" level=error msg="StopPodSandbox for \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\" failed" error="failed to destroy network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:25:29.809341 kubelet[2481]: E0625 16:25:29.809316 2481 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:29.809462 kubelet[2481]: E0625 16:25:29.809362 2481 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b"} Jun 25 16:25:29.809462 kubelet[2481]: E0625 16:25:29.809413 2481 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:25:29.809462 kubelet[2481]: E0625 16:25:29.809458 2481 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-p75ks" podUID="0f943ce0-f09c-40ca-9640-4ebcb02d1c9f" Jun 25 16:25:35.566018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008860269.mount: Deactivated successfully. 
Jun 25 16:25:35.599479 containerd[1398]: time="2024-06-25T16:25:35.599394860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:35.601154 containerd[1398]: time="2024-06-25T16:25:35.601063312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:25:35.602796 containerd[1398]: time="2024-06-25T16:25:35.602728252Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:35.605515 containerd[1398]: time="2024-06-25T16:25:35.605472182Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:35.608073 containerd[1398]: time="2024-06-25T16:25:35.608031957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:35.609320 containerd[1398]: time="2024-06-25T16:25:35.609273815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.949154318s" Jun 25 16:25:35.609511 containerd[1398]: time="2024-06-25T16:25:35.609477876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:25:35.637503 containerd[1398]: time="2024-06-25T16:25:35.637428297Z" level=info msg="CreateContainer within sandbox \"c68bdb204b11db746f343530ac1d393eb275917f2916fddb4efef4650d9d2457\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:25:35.666274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794486387.mount: Deactivated successfully. Jun 25 16:25:35.670294 containerd[1398]: time="2024-06-25T16:25:35.670206276Z" level=info msg="CreateContainer within sandbox \"c68bdb204b11db746f343530ac1d393eb275917f2916fddb4efef4650d9d2457\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75\"" Jun 25 16:25:35.673746 containerd[1398]: time="2024-06-25T16:25:35.673601681Z" level=info msg="StartContainer for \"a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75\"" Jun 25 16:25:35.765748 containerd[1398]: time="2024-06-25T16:25:35.765674376Z" level=info msg="StartContainer for \"a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75\" returns successfully" Jun 25 16:25:35.886667 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:25:35.886945 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 25 16:25:36.751231 kubelet[2481]: I0625 16:25:36.750427 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-s9m7z" podStartSLOduration=2.666336231 podCreationTimestamp="2024-06-25 16:25:16 +0000 UTC" firstStartedPulling="2024-06-25 16:25:17.525957272 +0000 UTC m=+20.319570698" lastFinishedPulling="2024-06-25 16:25:35.609980962 +0000 UTC m=+38.403594392" observedRunningTime="2024-06-25 16:25:36.749904074 +0000 UTC m=+39.543517514" watchObservedRunningTime="2024-06-25 16:25:36.750359925 +0000 UTC m=+39.543973403" Jun 25 16:25:36.832196 systemd[1]: run-containerd-runc-k8s.io-a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75-runc.EphhXQ.mount: Deactivated successfully. Jun 25 16:25:37.392000 audit[3545]: AVC avc: denied { write } for pid=3545 comm="tee" name="fd" dev="proc" ino=23483 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.414286 kernel: audit: type=1400 audit(1719332737.392:264): avc: denied { write } for pid=3545 comm="tee" name="fd" dev="proc" ino=23483 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.392000 audit[3545]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcde5509c4 a2=241 a3=1b6 items=1 ppid=3521 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.460292 kernel: audit: type=1300 audit(1719332737.392:264): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcde5509c4 a2=241 a3=1b6 items=1 ppid=3521 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.460468 kernel: audit: type=1307 audit(1719332737.392:264): cwd="/etc/service/enabled/bird6/log" Jun 25 16:25:37.392000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:25:37.392000 audit: PATH item=0 name="/dev/fd/63" inode=24416 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.512373 kernel: audit: type=1302 audit(1719332737.392:264): item=0 name="/dev/fd/63" inode=24416 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.512558 kernel: audit: type=1327 audit(1719332737.392:264): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.392000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.452000 audit[3564]: AVC avc: denied { write } for pid=3564 comm="tee" name="fd" dev="proc" ino=24452 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.534346 kernel: audit: type=1400 audit(1719332737.452:265): avc: denied { write } for pid=3564 comm="tee" name="fd" dev="proc" ino=24452 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.565367 kernel: audit: type=1300 
audit(1719332737.452:265): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff3b39a9c4 a2=241 a3=1b6 items=1 ppid=3522 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.452000 audit[3564]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff3b39a9c4 a2=241 a3=1b6 items=1 ppid=3522 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.452000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:25:37.574399 kernel: audit: type=1307 audit(1719332737.452:265): cwd="/etc/service/enabled/confd/log" Jun 25 16:25:37.574551 kernel: audit: type=1302 audit(1719332737.452:265): item=0 name="/dev/fd/63" inode=24440 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.452000 audit: PATH item=0 name="/dev/fd/63" inode=24440 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.615226 kernel: audit: type=1327 audit(1719332737.452:265): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.454000 audit[3566]: AVC avc: denied { write } for pid=3566 comm="tee" name="fd" dev="proc" ino=24456 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.454000 audit[3566]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff3420d9b5 a2=241 a3=1b6 items=1 ppid=3515 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.454000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:25:37.454000 audit: PATH item=0 name="/dev/fd/63" inode=24441 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.456000 audit[3559]: AVC avc: denied { write } for pid=3559 comm="tee" name="fd" dev="proc" ino=24460 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.456000 audit[3559]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd677fa9c4 a2=241 a3=1b6 items=1 ppid=3518 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.456000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:25:37.456000 audit: PATH item=0 name="/dev/fd/63" inode=24437 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.456000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.471000 audit[3581]: AVC avc: denied { write } for pid=3581 comm="tee" name="fd" dev="proc" ino=23487 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.471000 audit[3581]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb40599c5 a2=241 a3=1b6 items=1 ppid=3517 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.471000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:25:37.471000 audit: PATH item=0 name="/dev/fd/63" inode=24464 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.471000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.470000 audit[3571]: AVC avc: denied { write } for pid=3571 comm="tee" name="fd" dev="proc" ino=24469 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.470000 audit[3571]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed01a79c6 a2=241 a3=1b6 items=1 ppid=3512 pid=3571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.470000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:25:37.470000 audit: PATH item=0 name="/dev/fd/63" inode=24442 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.470000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.490000 audit[3577]: AVC avc: denied { write } for pid=3577 comm="tee" name="fd" dev="proc" ino=24473 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:25:37.490000 audit[3577]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe3dbac9b4 a2=241 a3=1b6 items=1 ppid=3530 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:37.490000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:25:37.490000 audit: PATH item=0 name="/dev/fd/63" inode=24447 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:37.490000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:25:37.770295 systemd[1]: 
run-containerd-runc-k8s.io-a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75-runc.k6qOll.mount: Deactivated successfully. Jun 25 16:25:38.140627 systemd-networkd[1152]: vxlan.calico: Link UP Jun 25 16:25:38.140639 systemd-networkd[1152]: vxlan.calico: Gained carrier Jun 25 16:25:38.190000 audit: BPF prog-id=10 op=LOAD Jun 25 16:25:38.190000 audit[3672]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0f0822c0 a2=70 a3=7fbf1ad7b000 items=0 ppid=3520 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.190000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:38.191000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:25:38.192000 audit: BPF prog-id=11 op=LOAD Jun 25 16:25:38.192000 audit[3672]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0f0822c0 a2=70 a3=6f items=0 ppid=3520 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:38.192000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:25:38.192000 audit: BPF prog-id=12 op=LOAD Jun 25 16:25:38.192000 audit[3672]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0f082250 a2=70 a3=7ffc0f0822c0 items=0 ppid=3520 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.192000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:38.192000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:25:38.194000 audit: BPF prog-id=13 op=LOAD Jun 25 16:25:38.194000 audit[3672]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc0f082280 a2=70 a3=0 items=0 ppid=3520 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.194000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:25:38.219000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:25:38.219000 audit[3678]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7f2441379de0 a2=80000 a3=0 items=1 ppid=3520 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip" exe="/usr/sbin/ip" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.219000 audit: CWD cwd="/etc/service/enabled/felix" Jun 25 16:25:38.219000 audit: PATH item=0 
name="/lib64/libz.so.1" inode=523196 dev=00:cb mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:var_lib_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:25:38.219000 audit: PROCTITLE proctitle=6970006C696E6B0064656C0063616C69636F5F746D705F41 Jun 25 16:25:38.330000 audit[3702]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:38.330000 audit[3702]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe8462ae10 a2=0 a3=7ffe8462adfc items=0 ppid=3520 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.330000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:38.336000 audit[3700]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=3700 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:38.336000 audit[3700]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fff86042840 a2=0 a3=7fff8604282c items=0 ppid=3520 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.336000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:38.338000 audit[3701]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3701 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:38.338000 audit[3701]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffefcc6cff0 a2=0 a3=7ffefcc6cfdc items=0 ppid=3520 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.338000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:38.341000 audit[3703]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:38.341000 audit[3703]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffdc30f5e10 a2=0 a3=7ffdc30f5dfc items=0 ppid=3520 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:38.341000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:39.410892 systemd-networkd[1152]: vxlan.calico: Gained IPv6LL Jun 25 16:25:41.469707 containerd[1398]: time="2024-06-25T16:25:41.469551952Z" level=info msg="StopPodSandbox for \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\"" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.529 [INFO][3730] k8s.go 
608: Cleaning up netns ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.532 [INFO][3730] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" iface="eth0" netns="/var/run/netns/cni-6260d0f8-9577-c755-2881-c868110d8b5d" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.532 [INFO][3730] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" iface="eth0" netns="/var/run/netns/cni-6260d0f8-9577-c755-2881-c868110d8b5d" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.532 [INFO][3730] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" iface="eth0" netns="/var/run/netns/cni-6260d0f8-9577-c755-2881-c868110d8b5d" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.532 [INFO][3730] k8s.go 615: Releasing IP address(es) ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.532 [INFO][3730] utils.go 188: Calico CNI releasing IP address ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.558 [INFO][3736] ipam_plugin.go 411: Releasing address using handleID ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.559 [INFO][3736] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.559 [INFO][3736] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.566 [WARNING][3736] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.566 [INFO][3736] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.569 [INFO][3736] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:41.572753 containerd[1398]: 2024-06-25 16:25:41.571 [INFO][3730] k8s.go 621: Teardown processing complete. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:41.578737 systemd[1]: run-netns-cni\x2d6260d0f8\x2d9577\x2dc755\x2d2881\x2dc868110d8b5d.mount: Deactivated successfully. 
Jun 25 16:25:41.580938 containerd[1398]: time="2024-06-25T16:25:41.580876145Z" level=info msg="TearDown network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\" successfully" Jun 25 16:25:41.581112 containerd[1398]: time="2024-06-25T16:25:41.581083683Z" level=info msg="StopPodSandbox for \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\" returns successfully" Jun 25 16:25:41.582414 containerd[1398]: time="2024-06-25T16:25:41.582375274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bf9c55bd-bl5gf,Uid:b182dc22-0381-43d8-b490-0a8a7b490243,Namespace:calico-system,Attempt:1,}" Jun 25 16:25:41.744297 systemd-networkd[1152]: cali41a6fd8d7cb: Link UP Jun 25 16:25:41.759899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:41.760392 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali41a6fd8d7cb: link becomes ready Jun 25 16:25:41.761764 systemd-networkd[1152]: cali41a6fd8d7cb: Gained carrier Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.653 [INFO][3744] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0 calico-kube-controllers-68bf9c55bd- calico-system b182dc22-0381-43d8-b490-0a8a7b490243 709 0 2024-06-25 16:25:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68bf9c55bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal calico-kube-controllers-68bf9c55bd-bl5gf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali41a6fd8d7cb [] []}} ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.653 [INFO][3744] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.697 [INFO][3756] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" HandleID="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.707 [INFO][3756] ipam_plugin.go 264: Auto assigning IP ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" HandleID="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309de0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", "pod":"calico-kube-controllers-68bf9c55bd-bl5gf", "timestamp":"2024-06-25 16:25:41.697487551 +0000 UTC"}, Hostname:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.707 [INFO][3756] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.708 [INFO][3756] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.708 [INFO][3756] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal' Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.710 [INFO][3756] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.714 [INFO][3756] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.719 [INFO][3756] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.721 [INFO][3756] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.724 [INFO][3756] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.724 [INFO][3756] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.725 [INFO][3756] ipam.go 1685: Creating new handle: k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7 Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.730 [INFO][3756] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.737 [INFO][3756] ipam.go 1216: Successfully claimed IPs: [192.168.4.65/26] block=192.168.4.64/26 handle="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.737 [INFO][3756] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.65/26] handle="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.737 
[INFO][3756] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:41.779077 containerd[1398]: 2024-06-25 16:25:41.737 [INFO][3756] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.4.65/26] IPv6=[] ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" HandleID="k8s-pod-network.8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.780322 containerd[1398]: 2024-06-25 16:25:41.740 [INFO][3744] k8s.go 386: Populated endpoint ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0", GenerateName:"calico-kube-controllers-68bf9c55bd-", Namespace:"calico-system", SelfLink:"", UID:"b182dc22-0381-43d8-b490-0a8a7b490243", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bf9c55bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-68bf9c55bd-bl5gf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41a6fd8d7cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:41.780322 containerd[1398]: 2024-06-25 16:25:41.740 [INFO][3744] k8s.go 387: Calico CNI using IPs: [192.168.4.65/32] ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.780322 containerd[1398]: 2024-06-25 16:25:41.740 [INFO][3744] dataplane_linux.go 68: Setting the host side veth name to cali41a6fd8d7cb ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.780322 containerd[1398]: 2024-06-25 16:25:41.760 [INFO][3744] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.780322 containerd[1398]: 2024-06-25 16:25:41.763 [INFO][3744] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0", GenerateName:"calico-kube-controllers-68bf9c55bd-", Namespace:"calico-system", SelfLink:"", UID:"b182dc22-0381-43d8-b490-0a8a7b490243", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bf9c55bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7", Pod:"calico-kube-controllers-68bf9c55bd-bl5gf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41a6fd8d7cb", MAC:"e2:e1:3e:0b:72:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:41.780322 containerd[1398]: 2024-06-25 16:25:41.773 [INFO][3744] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7" Namespace="calico-system" Pod="calico-kube-controllers-68bf9c55bd-bl5gf" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:41.811000 audit[3777]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3777 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:41.811000 audit[3777]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffc0aadfe30 a2=0 a3=7ffc0aadfe1c items=0 ppid=3520 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:41.811000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 
16:25:41.823326 containerd[1398]: time="2024-06-25T16:25:41.823056342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:41.823326 containerd[1398]: time="2024-06-25T16:25:41.823139716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:41.823326 containerd[1398]: time="2024-06-25T16:25:41.823165926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:41.823326 containerd[1398]: time="2024-06-25T16:25:41.823183118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:41.921284 containerd[1398]: time="2024-06-25T16:25:41.921199264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bf9c55bd-bl5gf,Uid:b182dc22-0381-43d8-b490-0a8a7b490243,Namespace:calico-system,Attempt:1,} returns sandbox id \"8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7\"" Jun 25 16:25:41.924613 containerd[1398]: time="2024-06-25T16:25:41.924565721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:25:42.467899 containerd[1398]: time="2024-06-25T16:25:42.467844645Z" level=info msg="StopPodSandbox for \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\"" Jun 25 16:25:42.469921 containerd[1398]: time="2024-06-25T16:25:42.469863745Z" level=info msg="StopPodSandbox for \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\"" Jun 25 16:25:42.578788 systemd[1]: run-containerd-runc-k8s.io-8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7-runc.RqaM7W.mount: Deactivated successfully. Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.554 [INFO][3853] k8s.go 608: Cleaning up netns ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.555 [INFO][3853] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" iface="eth0" netns="/var/run/netns/cni-516142e7-9ac7-aca5-7e57-a2362547619c" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.555 [INFO][3853] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" iface="eth0" netns="/var/run/netns/cni-516142e7-9ac7-aca5-7e57-a2362547619c" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.555 [INFO][3853] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" iface="eth0" netns="/var/run/netns/cni-516142e7-9ac7-aca5-7e57-a2362547619c" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.555 [INFO][3853] k8s.go 615: Releasing IP address(es) ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.555 [INFO][3853] utils.go 188: Calico CNI releasing IP address ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.609 [INFO][3864] ipam_plugin.go 411: Releasing address using handleID ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.612 [INFO][3864] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.613 [INFO][3864] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.621 [WARNING][3864] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.621 [INFO][3864] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.623 [INFO][3864] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:42.628445 containerd[1398]: 2024-06-25 16:25:42.624 [INFO][3853] k8s.go 621: Teardown processing complete. ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:42.633311 systemd[1]: run-netns-cni\x2d516142e7\x2d9ac7\x2daca5\x2d7e57\x2da2362547619c.mount: Deactivated successfully. Jun 25 16:25:42.635642 containerd[1398]: time="2024-06-25T16:25:42.635505846Z" level=info msg="TearDown network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\" successfully" Jun 25 16:25:42.635642 containerd[1398]: time="2024-06-25T16:25:42.635560858Z" level=info msg="StopPodSandbox for \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\" returns successfully" Jun 25 16:25:42.636758 containerd[1398]: time="2024-06-25T16:25:42.636711300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b9hjd,Uid:c210fa31-ccfa-431b-9325-067eea977ba2,Namespace:kube-system,Attempt:1,}" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.555 [INFO][3848] k8s.go 608: Cleaning up netns ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.555 [INFO][3848] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" iface="eth0" netns="/var/run/netns/cni-8543fc47-0a10-ba87-8c02-3688ff219203" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.556 [INFO][3848] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" iface="eth0" netns="/var/run/netns/cni-8543fc47-0a10-ba87-8c02-3688ff219203" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.556 [INFO][3848] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" iface="eth0" netns="/var/run/netns/cni-8543fc47-0a10-ba87-8c02-3688ff219203" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.556 [INFO][3848] k8s.go 615: Releasing IP address(es) ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.556 [INFO][3848] utils.go 188: Calico CNI releasing IP address ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.616 [INFO][3865] ipam_plugin.go 411: Releasing address using handleID ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.617 [INFO][3865] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.623 [INFO][3865] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.637 [WARNING][3865] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.638 [INFO][3865] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.640 [INFO][3865] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:42.644400 containerd[1398]: 2024-06-25 16:25:42.642 [INFO][3848] k8s.go 621: Teardown processing complete. 
ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:42.650385 containerd[1398]: time="2024-06-25T16:25:42.650339460Z" level=info msg="TearDown network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\" successfully" Jun 25 16:25:42.650507 containerd[1398]: time="2024-06-25T16:25:42.650483317Z" level=info msg="StopPodSandbox for \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\" returns successfully" Jun 25 16:25:42.651419 containerd[1398]: time="2024-06-25T16:25:42.651382027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p75ks,Uid:0f943ce0-f09c-40ca-9640-4ebcb02d1c9f,Namespace:calico-system,Attempt:1,}" Jun 25 16:25:42.651729 systemd[1]: run-netns-cni\x2d8543fc47\x2d0a10\x2dba87\x2d8c02\x2d3688ff219203.mount: Deactivated successfully. Jun 25 16:25:42.915336 systemd-networkd[1152]: cali609de883c81: Link UP Jun 25 16:25:42.923649 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:42.932294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali609de883c81: link becomes ready Jun 25 16:25:42.935120 systemd-networkd[1152]: cali609de883c81: Gained carrier Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.760 [INFO][3877] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0 coredns-5dd5756b68- kube-system c210fa31-ccfa-431b-9325-067eea977ba2 719 0 2024-06-25 16:25:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal coredns-5dd5756b68-b9hjd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali609de883c81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.760 [INFO][3877] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.844 [INFO][3897] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" HandleID="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.861 [INFO][3897] ipam_plugin.go 264: Auto assigning IP ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" HandleID="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312500), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", "pod":"coredns-5dd5756b68-b9hjd", "timestamp":"2024-06-25 16:25:42.839793123 +0000 UTC"}, Hostname:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.862 [INFO][3897] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.863 [INFO][3897] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.863 [INFO][3897] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal' Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.865 [INFO][3897] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.870 [INFO][3897] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.875 [INFO][3897] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.878 [INFO][3897] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.886 [INFO][3897] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.886 [INFO][3897] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.888 [INFO][3897] ipam.go 1685: Creating new handle: k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.896 [INFO][3897] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.905 [INFO][3897] ipam.go 1216: Successfully claimed IPs: [192.168.4.66/26] block=192.168.4.64/26 handle="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.905 [INFO][3897] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.66/26] handle="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.905 [INFO][3897] 
ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:42.968336 containerd[1398]: 2024-06-25 16:25:42.905 [INFO][3897] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.4.66/26] IPv6=[] ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" HandleID="k8s-pod-network.73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.969570 containerd[1398]: 2024-06-25 16:25:42.910 [INFO][3877] k8s.go 386: Populated endpoint ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c210fa31-ccfa-431b-9325-067eea977ba2", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-5dd5756b68-b9hjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali609de883c81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:42.969570 containerd[1398]: 2024-06-25 16:25:42.911 [INFO][3877] k8s.go 387: Calico CNI using IPs: [192.168.4.66/32] ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.969570 containerd[1398]: 2024-06-25 16:25:42.911 [INFO][3877] dataplane_linux.go 68: Setting the host side veth name to cali609de883c81 ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.969570 containerd[1398]: 2024-06-25 16:25:42.940 [INFO][3877] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.969570 containerd[1398]: 2024-06-25 16:25:42.941 [INFO][3877] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c210fa31-ccfa-431b-9325-067eea977ba2", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed", Pod:"coredns-5dd5756b68-b9hjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali609de883c81", MAC:"1e:64:a8:a8:40:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:42.969570 containerd[1398]: 2024-06-25 16:25:42.963 [INFO][3877] k8s.go 500: Wrote updated endpoint to datastore ContainerID="73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed" Namespace="kube-system" Pod="coredns-5dd5756b68-b9hjd" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:42.996439 systemd-networkd[1152]: cali244e5e5ec67: Link UP Jun 25 16:25:43.003587 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali244e5e5ec67: link becomes ready Jun 25 16:25:43.003640 systemd-networkd[1152]: cali244e5e5ec67: Gained carrier Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.782 [INFO][3881] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0 csi-node-driver- calico-system 
0f943ce0-f09c-40ca-9640-4ebcb02d1c9f 720 0 2024-06-25 16:25:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal csi-node-driver-p75ks eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali244e5e5ec67 [] []}} ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.783 [INFO][3881] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.878 [INFO][3904] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" HandleID="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.894 [INFO][3904] ipam_plugin.go 264: Auto assigning IP ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" HandleID="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035c5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", "pod":"csi-node-driver-p75ks", "timestamp":"2024-06-25 16:25:42.878297178 +0000 UTC"}, Hostname:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.894 [INFO][3904] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.905 [INFO][3904] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
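The repeated "Trying affinity for 192.168.4.64/26" lines reflect Calico's block-based IPAM: the pool is carved into /26 blocks of 64 addresses, and a block gains affinity to the node, so the addresses handed out in this section (192.168.4.66 above, .67 and .68 further down) all come from the same block. A minimal sketch of that block arithmetic, assuming nothing about Calico's own code:

```go
package main

import (
	"fmt"
	"net/netip"
)

// blockFor returns the /26 IPAM block containing ip, mirroring the block
// size reported in these logs (192.168.4.64/26). Illustrative arithmetic
// only, not Calico's implementation.
func blockFor(ip netip.Addr, blockBits int) netip.Prefix {
	p, _ := ip.Prefix(blockBits) // zeroes the host bits
	return p
}

func main() {
	for _, s := range []string{"192.168.4.66", "192.168.4.67", "192.168.4.68"} {
		ip := netip.MustParseAddr(s)
		block := blockFor(ip, 26)
		fmt.Printf("%s -> block %s (%d addresses)\n", ip, block, 1<<(32-block.Bits()))
	}
}
```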
Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.905 [INFO][3904] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal' Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.908 [INFO][3904] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.945 [INFO][3904] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.951 [INFO][3904] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.964 [INFO][3904] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.971 [INFO][3904] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.971 [INFO][3904] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.973 [INFO][3904] ipam.go 1685: Creating new handle: k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4 Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.979 [INFO][3904] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.985 [INFO][3904] ipam.go 1216: Successfully claimed IPs: [192.168.4.67/26] block=192.168.4.64/26 handle="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.985 [INFO][3904] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.67/26] handle="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.985 [INFO][3904] ipam_plugin.go 373: Released host-wide IPAM lock. 
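Two CNI ADD flows ([3897] for coredns-5dd5756b68-b9hjd and [3904] for csi-node-driver-p75ks) run almost concurrently here, yet the "About to acquire / Acquired / Released host-wide IPAM lock" lines show the assignments are serialized, which is why the pods come out with consecutive addresses (.66, then .67). A toy sketch of that idea, a single mutex guarding a per-block bitmap; hypothetical, not the ipam.go code the log references:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from one /26 block while a single
// lock serializes concurrent requests, as the "host-wide IPAM lock"
// messages describe. Purely illustrative.
type blockAllocator struct {
	mu   sync.Mutex
	base netip.Addr
	used [64]bool // one slot per address in a /26
}

func (a *blockAllocator) assign(handle string) (netip.Addr, bool) {
	a.mu.Lock()
	defer a.mu.Unlock()
	for i, inUse := range a.used {
		if !inUse {
			a.used[i] = true
			addr := a.base
			for j := 0; j < i; j++ {
				addr = addr.Next()
			}
			fmt.Printf("assigned %s to handle %s\n", addr, handle)
			return addr, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	alloc := &blockAllocator{base: netip.MustParseAddr("192.168.4.64")}
	alloc.used[0], alloc.used[1] = true, true // pretend .64/.65 were assigned earlier
	alloc.assign("k8s-pod-network.73a1ede...") // -> 192.168.4.66
	alloc.assign("k8s-pod-network.1683684...") // -> 192.168.4.67
}
```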
Jun 25 16:25:43.029526 containerd[1398]: 2024-06-25 16:25:42.985 [INFO][3904] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.4.67/26] IPv6=[] ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" HandleID="k8s-pod-network.168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:43.031181 containerd[1398]: 2024-06-25 16:25:42.988 [INFO][3881] k8s.go 386: Populated endpoint ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-p75ks", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali244e5e5ec67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:43.031181 containerd[1398]: 2024-06-25 16:25:42.988 [INFO][3881] k8s.go 387: Calico CNI using IPs: [192.168.4.67/32] ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:43.031181 containerd[1398]: 2024-06-25 16:25:42.988 [INFO][3881] dataplane_linux.go 68: Setting the host side veth name to cali244e5e5ec67 ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:43.031181 containerd[1398]: 2024-06-25 16:25:43.005 [INFO][3881] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:43.031181 containerd[1398]: 2024-06-25 16:25:43.012 
[INFO][3881] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4", Pod:"csi-node-driver-p75ks", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali244e5e5ec67", MAC:"3e:dc:98:64:2b:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:43.031181 containerd[1398]: 2024-06-25 16:25:43.027 [INFO][3881] k8s.go 500: Wrote updated endpoint to datastore ContainerID="168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4" Namespace="calico-system" Pod="csi-node-driver-p75ks" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:43.078105 kernel: kauditd_printk_skb: 60 callbacks suppressed Jun 25 16:25:43.078283 kernel: audit: type=1325 audit(1719332743.055:284): table=filter:102 family=2 entries=38 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:43.055000 audit[3932]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:43.055000 audit[3932]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffec2a298d0 a2=0 a3=7ffec2a298bc items=0 ppid=3520 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.126771 kernel: audit: type=1300 audit(1719332743.055:284): arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffec2a298d0 a2=0 a3=7ffec2a298bc items=0 ppid=3520 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.055000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:43.151667 kernel: audit: type=1327 audit(1719332743.055:284): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:43.162072 containerd[1398]: time="2024-06-25T16:25:43.130156997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:43.162072 containerd[1398]: time="2024-06-25T16:25:43.130437749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:43.162072 containerd[1398]: time="2024-06-25T16:25:43.130468894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:43.162072 containerd[1398]: time="2024-06-25T16:25:43.130587966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:43.129000 audit[3951]: NETFILTER_CFG table=filter:103 family=2 entries=38 op=nft_register_chain pid=3951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:43.129000 audit[3951]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffd148f4930 a2=0 a3=7ffd148f491c items=0 ppid=3520 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.212196 kernel: audit: type=1325 audit(1719332743.129:285): table=filter:103 family=2 entries=38 op=nft_register_chain pid=3951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:43.212944 kernel: audit: type=1300 audit(1719332743.129:285): arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffd148f4930 a2=0 a3=7ffd148f491c items=0 ppid=3520 pid=3951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.129000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:43.233283 kernel: audit: type=1327 audit(1719332743.129:285): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:43.286338 containerd[1398]: time="2024-06-25T16:25:43.286121699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:43.286794 containerd[1398]: time="2024-06-25T16:25:43.286742868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:43.286975 containerd[1398]: time="2024-06-25T16:25:43.286940178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:43.287134 containerd[1398]: time="2024-06-25T16:25:43.287100414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:43.343381 containerd[1398]: time="2024-06-25T16:25:43.342412856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-b9hjd,Uid:c210fa31-ccfa-431b-9325-067eea977ba2,Namespace:kube-system,Attempt:1,} returns sandbox id \"73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed\"" Jun 25 16:25:43.349996 containerd[1398]: time="2024-06-25T16:25:43.349185088Z" level=info msg="CreateContainer within sandbox \"73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:25:43.381966 systemd-networkd[1152]: cali41a6fd8d7cb: Gained IPv6LL Jun 25 16:25:43.400804 containerd[1398]: time="2024-06-25T16:25:43.400639187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-p75ks,Uid:0f943ce0-f09c-40ca-9640-4ebcb02d1c9f,Namespace:calico-system,Attempt:1,} returns sandbox id \"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4\"" Jun 25 16:25:43.439175 containerd[1398]: time="2024-06-25T16:25:43.439069844Z" level=info msg="CreateContainer within sandbox \"73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7df701cd13e7e518778d65380ca49bb9744b37bedd8bfb486cd3c5c13fc504fa\"" Jun 25 16:25:43.442625 containerd[1398]: time="2024-06-25T16:25:43.442517967Z" level=info msg="StartContainer for \"7df701cd13e7e518778d65380ca49bb9744b37bedd8bfb486cd3c5c13fc504fa\"" Jun 25 16:25:43.583935 containerd[1398]: time="2024-06-25T16:25:43.583777052Z" level=info msg="StartContainer for \"7df701cd13e7e518778d65380ca49bb9744b37bedd8bfb486cd3c5c13fc504fa\" returns successfully" Jun 25 16:25:43.768458 kubelet[2481]: I0625 16:25:43.767732 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-b9hjd" podStartSLOduration=33.767642674 podCreationTimestamp="2024-06-25 16:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:25:43.767088458 +0000 UTC m=+46.560701898" watchObservedRunningTime="2024-06-25 16:25:43.767642674 +0000 UTC m=+46.561256116" Jun 25 16:25:43.790000 audit[4069]: NETFILTER_CFG table=filter:104 family=2 entries=14 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:43.808719 kernel: audit: type=1325 audit(1719332743.790:286): table=filter:104 family=2 entries=14 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:43.790000 audit[4069]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff52bce660 a2=0 a3=7fff52bce64c items=0 ppid=2636 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.847289 kernel: audit: type=1300 audit(1719332743.790:286): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff52bce660 a2=0 a3=7fff52bce64c items=0 ppid=2636 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:43.811000 audit[4069]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:43.881168 kernel: audit: type=1327 audit(1719332743.790:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:43.881378 kernel: audit: type=1325 audit(1719332743.811:287): table=nat:105 family=2 entries=14 op=nft_register_rule pid=4069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:43.811000 audit[4069]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff52bce660 a2=0 a3=0 items=0 ppid=2636 pid=4069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:43.811000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:44.403250 systemd-networkd[1152]: cali609de883c81: Gained IPv6LL Jun 25 16:25:44.467958 containerd[1398]: time="2024-06-25T16:25:44.467891832Z" level=info msg="StopPodSandbox for \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\"" Jun 25 16:25:44.469292 systemd-networkd[1152]: cali244e5e5ec67: Gained IPv6LL Jun 25 16:25:44.476174 containerd[1398]: time="2024-06-25T16:25:44.476125891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:44.477707 containerd[1398]: time="2024-06-25T16:25:44.477631621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:25:44.479352 containerd[1398]: time="2024-06-25T16:25:44.479304645Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:44.484592 containerd[1398]: time="2024-06-25T16:25:44.484545209Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:44.490511 containerd[1398]: time="2024-06-25T16:25:44.490452969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:44.493206 containerd[1398]: time="2024-06-25T16:25:44.493144139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.568510259s" Jun 25 16:25:44.493396 containerd[1398]: time="2024-06-25T16:25:44.493215504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference 
\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:25:44.495474 containerd[1398]: time="2024-06-25T16:25:44.495429567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:25:44.533233 containerd[1398]: time="2024-06-25T16:25:44.532578720Z" level=info msg="CreateContainer within sandbox \"8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:25:44.562946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076930642.mount: Deactivated successfully. Jun 25 16:25:44.568720 containerd[1398]: time="2024-06-25T16:25:44.568646511Z" level=info msg="CreateContainer within sandbox \"8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"633e3ba82adf15d9fc5743c7ffec6e1a715e59ca9af6ab85e457ed2aa56ef388\"" Jun 25 16:25:44.570171 containerd[1398]: time="2024-06-25T16:25:44.570125716Z" level=info msg="StartContainer for \"633e3ba82adf15d9fc5743c7ffec6e1a715e59ca9af6ab85e457ed2aa56ef388\"" Jun 25 16:25:44.666992 systemd[1]: run-containerd-runc-k8s.io-633e3ba82adf15d9fc5743c7ffec6e1a715e59ca9af6ab85e457ed2aa56ef388-runc.WSuk3I.mount: Deactivated successfully. Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.634 [INFO][4087] k8s.go 608: Cleaning up netns ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.637 [INFO][4087] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" iface="eth0" netns="/var/run/netns/cni-26d3feed-67ed-91af-653f-b20cdc43358d" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.638 [INFO][4087] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" iface="eth0" netns="/var/run/netns/cni-26d3feed-67ed-91af-653f-b20cdc43358d" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.638 [INFO][4087] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" iface="eth0" netns="/var/run/netns/cni-26d3feed-67ed-91af-653f-b20cdc43358d" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.638 [INFO][4087] k8s.go 615: Releasing IP address(es) ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.638 [INFO][4087] utils.go 188: Calico CNI releasing IP address ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.730 [INFO][4109] ipam_plugin.go 411: Releasing address using handleID ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.731 [INFO][4109] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.731 [INFO][4109] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.767 [WARNING][4109] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.771 [INFO][4109] ipam_plugin.go 439: Releasing address using workloadID ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.776 [INFO][4109] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:44.796048 containerd[1398]: 2024-06-25 16:25:44.787 [INFO][4087] k8s.go 621: Teardown processing complete. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:44.796084 systemd[1]: run-netns-cni\x2d26d3feed\x2d67ed\x2d91af\x2d653f\x2db20cdc43358d.mount: Deactivated successfully. Jun 25 16:25:44.801529 containerd[1398]: time="2024-06-25T16:25:44.798387816Z" level=info msg="TearDown network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\" successfully" Jun 25 16:25:44.801529 containerd[1398]: time="2024-06-25T16:25:44.798454651Z" level=info msg="StopPodSandbox for \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\" returns successfully" Jun 25 16:25:44.801529 containerd[1398]: time="2024-06-25T16:25:44.799786958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jmv4k,Uid:50a27c96-8776-4abc-85f1-1753c70aac48,Namespace:kube-system,Attempt:1,}" Jun 25 16:25:44.823000 audit[4127]: NETFILTER_CFG table=filter:106 family=2 entries=11 op=nft_register_rule pid=4127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:44.823000 audit[4127]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdcc7bc090 a2=0 a3=7ffdcc7bc07c items=0 ppid=2636 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:44.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:44.827000 audit[4127]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=4127 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:44.827000 audit[4127]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdcc7bc090 a2=0 a3=7ffdcc7bc07c items=0 ppid=2636 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:44.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:44.852230 containerd[1398]: time="2024-06-25T16:25:44.852155359Z" level=info msg="StartContainer for \"633e3ba82adf15d9fc5743c7ffec6e1a715e59ca9af6ab85e457ed2aa56ef388\" returns successfully" 
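The audit PROCTITLE records in this section carry the triggering command line as a hex blob with NUL-separated arguments. Decoding the value shown above recovers `iptables-restore -w 5 -W 100000 --noflush --counters`, and the longer iptables-nft variant earlier decodes to `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`. A small decoder sketch:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex blob into the argv it
// encodes; arguments are separated by NUL bytes in the raw record.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Value copied from one of the audit records above.
	const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	argv, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(argv, " ")) // iptables-restore -w 5 -W 100000 --noflush --counters
}
```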
Jun 25 16:25:45.139909 systemd-networkd[1152]: califa03c072178: Link UP Jun 25 16:25:45.159825 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:45.160119 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califa03c072178: link becomes ready Jun 25 16:25:45.170403 systemd-networkd[1152]: califa03c072178: Gained carrier Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:44.960 [INFO][4134] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0 coredns-5dd5756b68- kube-system 50a27c96-8776-4abc-85f1-1753c70aac48 741 0 2024-06-25 16:25:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal coredns-5dd5756b68-jmv4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa03c072178 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:44.961 [INFO][4134] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.056 [INFO][4148] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" HandleID="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.072 [INFO][4148] ipam_plugin.go 264: Auto assigning IP ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" HandleID="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029be30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", "pod":"coredns-5dd5756b68-jmv4k", "timestamp":"2024-06-25 16:25:45.05634164 +0000 UTC"}, Hostname:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.072 [INFO][4148] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.072 [INFO][4148] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
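The WorkloadEndpoint dumps above print numeric fields in Go's default hex form, so the coredns ports appear as Port:0x35 and Port:0x23c1, which are simply 53 (DNS over UDP and TCP) and 9153 (the CoreDNS Prometheus metrics port). A tiny local mirror of those port fields, hypothetical type names only, to show the mapping:

```go
package main

import "fmt"

// endpointPort mirrors just the fields of interest from the
// v3.WorkloadEndpointPort values dumped in the log; it is a local
// illustration, not the Calico API type.
type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	ports := []endpointPort{
		{Name: "dns", Protocol: "UDP", Port: 0x35},       // 53
		{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53
		{Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153
	}
	for _, p := range ports {
		fmt.Printf("%-8s %s/%d\n", p.Name, p.Protocol, p.Port)
	}
}
```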
Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.072 [INFO][4148] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal' Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.075 [INFO][4148] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.082 [INFO][4148] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.089 [INFO][4148] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.093 [INFO][4148] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.108 [INFO][4148] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.108 [INFO][4148] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.110 [INFO][4148] ipam.go 1685: Creating new handle: k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.116 [INFO][4148] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.125 [INFO][4148] ipam.go 1216: Successfully claimed IPs: [192.168.4.68/26] block=192.168.4.64/26 handle="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.126 [INFO][4148] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.68/26] handle="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.126 [INFO][4148] ipam_plugin.go 373: Released host-wide IPAM lock. 
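The NETFILTER_CFG audit records interleaved with the CNI output are flat key=value lines: family=2 is AF_INET, and op/entries show how many nftables chains or rules each iptables-restore run registered. A rough key=value parser sketch, using sample fields copied from one of the records above; it ignores values containing spaces:

```go
package main

import (
	"fmt"
	"strings"
)

var families = map[string]string{"2": "AF_INET", "10": "AF_INET6"}

// parseAudit splits a flat key=value audit record into a map,
// stripping quotes from quoted values such as comm="...".
func parseAudit(rec string) map[string]string {
	out := map[string]string{}
	for _, f := range strings.Fields(rec) {
		if k, v, ok := strings.Cut(f, "="); ok {
			out[k] = strings.Trim(v, `"`)
		}
	}
	return out
}

func main() {
	// Sample fields copied from one of the NETFILTER_CFG records above.
	rec := `table=filter:102 family=2 entries=38 op=nft_register_chain pid=3932 comm="iptables-nft-re"`
	kv := parseAudit(rec)
	fmt.Printf("%s registered %s entries in table %s (family %s)\n",
		kv["comm"], kv["entries"], kv["table"], families[kv["family"]])
}
```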
Jun 25 16:25:45.193157 containerd[1398]: 2024-06-25 16:25:45.126 [INFO][4148] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.4.68/26] IPv6=[] ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" HandleID="k8s-pod-network.7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:45.194571 containerd[1398]: 2024-06-25 16:25:45.128 [INFO][4134] k8s.go 386: Populated endpoint ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"50a27c96-8776-4abc-85f1-1753c70aac48", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-5dd5756b68-jmv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa03c072178", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:45.194571 containerd[1398]: 2024-06-25 16:25:45.128 [INFO][4134] k8s.go 387: Calico CNI using IPs: [192.168.4.68/32] ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:45.194571 containerd[1398]: 2024-06-25 16:25:45.129 [INFO][4134] dataplane_linux.go 68: Setting the host side veth name to califa03c072178 ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:45.194571 containerd[1398]: 2024-06-25 16:25:45.165 [INFO][4134] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:45.194571 containerd[1398]: 2024-06-25 16:25:45.166 [INFO][4134] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"50a27c96-8776-4abc-85f1-1753c70aac48", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f", Pod:"coredns-5dd5756b68-jmv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa03c072178", MAC:"76:bd:78:90:b1:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:45.194571 containerd[1398]: 2024-06-25 16:25:45.185 [INFO][4134] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f" Namespace="kube-system" Pod="coredns-5dd5756b68-jmv4k" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:45.278849 containerd[1398]: time="2024-06-25T16:25:45.278442276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:45.278849 containerd[1398]: time="2024-06-25T16:25:45.278583913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:45.278849 containerd[1398]: time="2024-06-25T16:25:45.278613683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:45.278849 containerd[1398]: time="2024-06-25T16:25:45.278631726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:45.297000 audit[4180]: NETFILTER_CFG table=filter:108 family=2 entries=38 op=nft_register_chain pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:45.297000 audit[4180]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffc6ead8bb0 a2=0 a3=7ffc6ead8b9c items=0 ppid=3520 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:45.297000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:45.422503 containerd[1398]: time="2024-06-25T16:25:45.422280902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jmv4k,Uid:50a27c96-8776-4abc-85f1-1753c70aac48,Namespace:kube-system,Attempt:1,} returns sandbox id \"7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f\"" Jun 25 16:25:45.427905 containerd[1398]: time="2024-06-25T16:25:45.427846048Z" level=info msg="CreateContainer within sandbox \"7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:25:45.449476 containerd[1398]: time="2024-06-25T16:25:45.449382988Z" level=info msg="CreateContainer within sandbox \"7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55c02de01d1b3c2f60f44069bb36b90ccc5ceba9a4e0325e361876c43d1ae7e3\"" Jun 25 16:25:45.451008 containerd[1398]: time="2024-06-25T16:25:45.450954514Z" level=info msg="StartContainer for \"55c02de01d1b3c2f60f44069bb36b90ccc5ceba9a4e0325e361876c43d1ae7e3\"" Jun 25 16:25:45.713572 containerd[1398]: time="2024-06-25T16:25:45.713389633Z" level=info msg="StartContainer for \"55c02de01d1b3c2f60f44069bb36b90ccc5ceba9a4e0325e361876c43d1ae7e3\" returns successfully" Jun 25 16:25:45.779815 kubelet[2481]: I0625 16:25:45.779393 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68bf9c55bd-bl5gf" podStartSLOduration=26.208138565 podCreationTimestamp="2024-06-25 16:25:17 +0000 UTC" firstStartedPulling="2024-06-25 16:25:41.922785037 +0000 UTC m=+44.716398455" lastFinishedPulling="2024-06-25 16:25:44.493865323 +0000 UTC m=+47.287478757" observedRunningTime="2024-06-25 16:25:45.772736419 +0000 UTC m=+48.566349860" watchObservedRunningTime="2024-06-25 16:25:45.779218867 +0000 UTC m=+48.572832309" Jun 25 16:25:45.844000 audit[4259]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:45.844000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff8bf76730 a2=0 a3=7fff8bf7671c items=0 ppid=2636 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:45.844000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:45.851000 audit[4259]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:45.851000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff8bf76730 a2=0 a3=7fff8bf7671c items=0 ppid=2636 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:45.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:45.891852 systemd[1]: run-containerd-runc-k8s.io-633e3ba82adf15d9fc5743c7ffec6e1a715e59ca9af6ab85e457ed2aa56ef388-runc.fkWy2R.mount: Deactivated successfully. Jun 25 16:25:46.031358 kubelet[2481]: I0625 16:25:46.030416 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jmv4k" podStartSLOduration=36.030327142 podCreationTimestamp="2024-06-25 16:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:25:45.800755304 +0000 UTC m=+48.594368744" watchObservedRunningTime="2024-06-25 16:25:46.030327142 +0000 UTC m=+48.823940581" Jun 25 16:25:46.139372 containerd[1398]: time="2024-06-25T16:25:46.139295357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:46.141603 containerd[1398]: time="2024-06-25T16:25:46.141467656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:25:46.143292 containerd[1398]: time="2024-06-25T16:25:46.143216774Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:46.146467 containerd[1398]: time="2024-06-25T16:25:46.146425406Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:46.149218 containerd[1398]: time="2024-06-25T16:25:46.149174614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:46.150857 containerd[1398]: time="2024-06-25T16:25:46.150799744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.655137006s" Jun 25 16:25:46.151024 containerd[1398]: time="2024-06-25T16:25:46.150868529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:25:46.156708 containerd[1398]: time="2024-06-25T16:25:46.156646454Z" level=info msg="CreateContainer within sandbox \"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:25:46.190433 containerd[1398]: time="2024-06-25T16:25:46.190326351Z" level=info msg="CreateContainer within sandbox \"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e0b37bc25d1ead26d69788a7b9b575e5917406834463fa33bfc73d51d5a218aa\"" Jun 25 16:25:46.195596 containerd[1398]: time="2024-06-25T16:25:46.191593187Z" level=info msg="StartContainer for \"e0b37bc25d1ead26d69788a7b9b575e5917406834463fa33bfc73d51d5a218aa\"" Jun 25 16:25:46.295225 containerd[1398]: time="2024-06-25T16:25:46.294884530Z" level=info msg="StartContainer for \"e0b37bc25d1ead26d69788a7b9b575e5917406834463fa33bfc73d51d5a218aa\" returns successfully" Jun 25 16:25:46.302008 containerd[1398]: time="2024-06-25T16:25:46.301802142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:25:46.804000 audit[4306]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4306 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:46.804000 audit[4306]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffcca4aef0 a2=0 a3=7fffcca4aedc items=0 ppid=2636 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:46.812000 audit[4306]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4306 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:46.812000 audit[4306]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffcca4aef0 a2=0 a3=7fffcca4aedc items=0 ppid=2636 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:46.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:47.027340 systemd-networkd[1152]: califa03c072178: Gained IPv6LL Jun 25 16:25:47.600380 containerd[1398]: time="2024-06-25T16:25:47.600282776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:47.603151 containerd[1398]: time="2024-06-25T16:25:47.603068803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:25:47.603847 containerd[1398]: time="2024-06-25T16:25:47.603809709Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:47.610326 containerd[1398]: time="2024-06-25T16:25:47.607363384Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:47.611827 containerd[1398]: time="2024-06-25T16:25:47.611781297Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:25:47.614812 containerd[1398]: time="2024-06-25T16:25:47.614739275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.312854508s" Jun 25 16:25:47.615064 containerd[1398]: time="2024-06-25T16:25:47.614991633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:25:47.624064 containerd[1398]: time="2024-06-25T16:25:47.622652905Z" level=info msg="CreateContainer within sandbox \"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:25:47.659988 containerd[1398]: time="2024-06-25T16:25:47.659905787Z" level=info msg="CreateContainer within sandbox \"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3c0bb07066f8137dc92857626cef5d74f4cb70fe03b4b73fb974948d1db3f3be\"" Jun 25 16:25:47.661929 containerd[1398]: time="2024-06-25T16:25:47.661879711Z" level=info msg="StartContainer for \"3c0bb07066f8137dc92857626cef5d74f4cb70fe03b4b73fb974948d1db3f3be\"" Jun 25 16:25:47.746726 systemd[1]: run-containerd-runc-k8s.io-3c0bb07066f8137dc92857626cef5d74f4cb70fe03b4b73fb974948d1db3f3be-runc.fj0DjA.mount: Deactivated successfully. Jun 25 16:25:47.835637 containerd[1398]: time="2024-06-25T16:25:47.835560890Z" level=info msg="StartContainer for \"3c0bb07066f8137dc92857626cef5d74f4cb70fe03b4b73fb974948d1db3f3be\" returns successfully" Jun 25 16:25:48.645735 kubelet[2481]: I0625 16:25:48.645682 2481 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:25:48.645735 kubelet[2481]: I0625 16:25:48.645746 2481 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:25:50.842528 systemd[1]: run-containerd-runc-k8s.io-a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75-runc.ymeGOu.mount: Deactivated successfully. 
Jun 25 16:25:50.950654 kubelet[2481]: I0625 16:25:50.950591 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-p75ks" podStartSLOduration=29.739693959 podCreationTimestamp="2024-06-25 16:25:17 +0000 UTC" firstStartedPulling="2024-06-25 16:25:43.404810545 +0000 UTC m=+46.198423970" lastFinishedPulling="2024-06-25 16:25:47.615583547 +0000 UTC m=+50.409196964" observedRunningTime="2024-06-25 16:25:48.791439684 +0000 UTC m=+51.585053127" watchObservedRunningTime="2024-06-25 16:25:50.950466953 +0000 UTC m=+53.744080395" Jun 25 16:25:56.634766 kubelet[2481]: I0625 16:25:56.634697 2481 topology_manager.go:215] "Topology Admit Handler" podUID="01603b95-859d-4b73-9f20-f6f39d9237ad" podNamespace="calico-apiserver" podName="calico-apiserver-855cc674c8-hgbfh" Jun 25 16:25:56.660665 kubelet[2481]: I0625 16:25:56.660597 2481 topology_manager.go:215] "Topology Admit Handler" podUID="8441501d-7308-40c9-8167-2fccd83f2b61" podNamespace="calico-apiserver" podName="calico-apiserver-855cc674c8-6c2xb" Jun 25 16:25:56.762159 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:25:56.762472 kernel: audit: type=1325 audit(1719332756.743:295): table=filter:113 family=2 entries=9 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.743000 audit[4386]: NETFILTER_CFG table=filter:113 family=2 entries=9 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.762641 kubelet[2481]: I0625 16:25:56.761291 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8441501d-7308-40c9-8167-2fccd83f2b61-calico-apiserver-certs\") pod \"calico-apiserver-855cc674c8-6c2xb\" (UID: \"8441501d-7308-40c9-8167-2fccd83f2b61\") " pod="calico-apiserver/calico-apiserver-855cc674c8-6c2xb" Jun 25 16:25:56.762641 kubelet[2481]: I0625 16:25:56.761381 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwp76\" (UniqueName: \"kubernetes.io/projected/8441501d-7308-40c9-8167-2fccd83f2b61-kube-api-access-rwp76\") pod \"calico-apiserver-855cc674c8-6c2xb\" (UID: \"8441501d-7308-40c9-8167-2fccd83f2b61\") " pod="calico-apiserver/calico-apiserver-855cc674c8-6c2xb" Jun 25 16:25:56.762641 kubelet[2481]: I0625 16:25:56.761449 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdh7\" (UniqueName: \"kubernetes.io/projected/01603b95-859d-4b73-9f20-f6f39d9237ad-kube-api-access-ksdh7\") pod \"calico-apiserver-855cc674c8-hgbfh\" (UID: \"01603b95-859d-4b73-9f20-f6f39d9237ad\") " pod="calico-apiserver/calico-apiserver-855cc674c8-hgbfh" Jun 25 16:25:56.762641 kubelet[2481]: I0625 16:25:56.761490 2481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/01603b95-859d-4b73-9f20-f6f39d9237ad-calico-apiserver-certs\") pod \"calico-apiserver-855cc674c8-hgbfh\" (UID: \"01603b95-859d-4b73-9f20-f6f39d9237ad\") " pod="calico-apiserver/calico-apiserver-855cc674c8-hgbfh" Jun 25 16:25:56.743000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe74ab21a0 a2=0 a3=7ffe74ab218c items=0 ppid=2636 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:56.800290 kernel: audit: type=1300 audit(1719332756.743:295): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe74ab21a0 a2=0 a3=7ffe74ab218c items=0 ppid=2636 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:56.800481 kernel: audit: type=1327 audit(1719332756.743:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:56.743000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:56.799000 audit[4386]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.799000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe74ab21a0 a2=0 a3=7ffe74ab218c items=0 ppid=2636 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:56.863458 kubelet[2481]: E0625 16:25:56.863403 2481 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:25:56.863910 kubelet[2481]: E0625 16:25:56.863891 2481 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01603b95-859d-4b73-9f20-f6f39d9237ad-calico-apiserver-certs podName:01603b95-859d-4b73-9f20-f6f39d9237ad nodeName:}" failed. No retries permitted until 2024-06-25 16:25:57.363817045 +0000 UTC m=+60.157430486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/01603b95-859d-4b73-9f20-f6f39d9237ad-calico-apiserver-certs") pod "calico-apiserver-855cc674c8-hgbfh" (UID: "01603b95-859d-4b73-9f20-f6f39d9237ad") : secret "calico-apiserver-certs" not found Jun 25 16:25:56.865184 kubelet[2481]: E0625 16:25:56.865150 2481 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:25:56.865436 kubelet[2481]: E0625 16:25:56.865422 2481 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8441501d-7308-40c9-8167-2fccd83f2b61-calico-apiserver-certs podName:8441501d-7308-40c9-8167-2fccd83f2b61 nodeName:}" failed. No retries permitted until 2024-06-25 16:25:57.365381239 +0000 UTC m=+60.158994678 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8441501d-7308-40c9-8167-2fccd83f2b61-calico-apiserver-certs") pod "calico-apiserver-855cc674c8-6c2xb" (UID: "8441501d-7308-40c9-8167-2fccd83f2b61") : secret "calico-apiserver-certs" not found Jun 25 16:25:56.866753 kernel: audit: type=1325 audit(1719332756.799:296): table=nat:114 family=2 entries=20 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.866875 kernel: audit: type=1300 audit(1719332756.799:296): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe74ab21a0 a2=0 a3=7ffe74ab218c items=0 ppid=2636 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:56.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:56.882356 kernel: audit: type=1327 audit(1719332756.799:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:56.882616 kernel: audit: type=1325 audit(1719332756.815:297): table=filter:115 family=2 entries=10 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.815000 audit[4388]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.815000 audit[4388]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffcb03f1c50 a2=0 a3=7ffcb03f1c3c items=0 ppid=2636 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:56.948889 kernel: audit: type=1300 audit(1719332756.815:297): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffcb03f1c50 a2=0 a3=7ffcb03f1c3c items=0 ppid=2636 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:56.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:56.965567 kernel: audit: type=1327 audit(1719332756.815:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:56.965690 kernel: audit: type=1325 audit(1719332756.847:298): table=nat:116 family=2 entries=20 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.847000 audit[4388]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:25:56.847000 audit[4388]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcb03f1c50 a2=0 a3=7ffcb03f1c3c items=0 ppid=2636 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:56.847000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:25:57.409380 containerd[1398]: time="2024-06-25T16:25:57.396338783Z" level=info msg="StopPodSandbox for \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\"" Jun 25 16:25:57.549531 containerd[1398]: time="2024-06-25T16:25:57.548696276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cc674c8-hgbfh,Uid:01603b95-859d-4b73-9f20-f6f39d9237ad,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.492 [WARNING][4405] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c210fa31-ccfa-431b-9325-067eea977ba2", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed", Pod:"coredns-5dd5756b68-b9hjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali609de883c81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.493 [INFO][4405] k8s.go 608: Cleaning up netns ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.493 [INFO][4405] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" iface="eth0" netns="" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.493 [INFO][4405] k8s.go 615: Releasing IP address(es) ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.493 [INFO][4405] utils.go 188: Calico CNI releasing IP address ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.537 [INFO][4413] ipam_plugin.go 411: Releasing address using handleID ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.537 [INFO][4413] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.538 [INFO][4413] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.546 [WARNING][4413] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.546 [INFO][4413] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.550 [INFO][4413] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:57.556750 containerd[1398]: 2024-06-25 16:25:57.554 [INFO][4405] k8s.go 621: Teardown processing complete. 
ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:57.557902 containerd[1398]: time="2024-06-25T16:25:57.557838388Z" level=info msg="TearDown network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\" successfully" Jun 25 16:25:57.558085 containerd[1398]: time="2024-06-25T16:25:57.558056520Z" level=info msg="StopPodSandbox for \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\" returns successfully" Jun 25 16:25:57.559095 containerd[1398]: time="2024-06-25T16:25:57.559050919Z" level=info msg="RemovePodSandbox for \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\"" Jun 25 16:25:57.559455 containerd[1398]: time="2024-06-25T16:25:57.559368975Z" level=info msg="Forcibly stopping sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\"" Jun 25 16:25:57.584900 containerd[1398]: time="2024-06-25T16:25:57.584830426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cc674c8-6c2xb,Uid:8441501d-7308-40c9-8167-2fccd83f2b61,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:25:57.958843 systemd-networkd[1152]: calib93025ecdce: Link UP Jun 25 16:25:57.968747 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:25:57.979314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib93025ecdce: link becomes ready Jun 25 16:25:57.979352 systemd-networkd[1152]: calib93025ecdce: Gained carrier Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.732 [WARNING][4434] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c210fa31-ccfa-431b-9325-067eea977ba2", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"73a1edece781ec5c02c99353c53de06a05a77114decf87ab526620607d5e90ed", Pod:"coredns-5dd5756b68-b9hjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali609de883c81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.732 [INFO][4434] k8s.go 608: Cleaning up netns ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.732 [INFO][4434] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" iface="eth0" netns="" Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.732 [INFO][4434] k8s.go 615: Releasing IP address(es) ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.732 [INFO][4434] utils.go 188: Calico CNI releasing IP address ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.870 [INFO][4465] ipam_plugin.go 411: Releasing address using handleID ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.870 [INFO][4465] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.933 [INFO][4465] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.985 [WARNING][4465] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.985 [INFO][4465] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" HandleID="k8s-pod-network.1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--b9hjd-eth0" Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.988 [INFO][4465] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:58.018997 containerd[1398]: 2024-06-25 16:25:57.994 [INFO][4434] k8s.go 621: Teardown processing complete. 
ContainerID="1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad" Jun 25 16:25:58.018997 containerd[1398]: time="2024-06-25T16:25:58.012191175Z" level=info msg="TearDown network for sandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\" successfully" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.728 [INFO][4439] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0 calico-apiserver-855cc674c8- calico-apiserver 01603b95-859d-4b73-9f20-f6f39d9237ad 849 0 2024-06-25 16:25:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855cc674c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal calico-apiserver-855cc674c8-hgbfh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib93025ecdce [] []}} ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.729 [INFO][4439] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.841 [INFO][4469] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" HandleID="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.866 [INFO][4469] ipam_plugin.go 264: Auto assigning IP ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" HandleID="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e59f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", "pod":"calico-apiserver-855cc674c8-hgbfh", "timestamp":"2024-06-25 16:25:57.841914534 +0000 UTC"}, Hostname:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.867 [INFO][4469] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.867 [INFO][4469] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.867 [INFO][4469] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal' Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.870 [INFO][4469] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.877 [INFO][4469] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.891 [INFO][4469] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.895 [INFO][4469] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.900 [INFO][4469] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.900 [INFO][4469] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.907 [INFO][4469] ipam.go 1685: Creating new handle: k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.920 [INFO][4469] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.932 [INFO][4469] ipam.go 1216: Successfully claimed IPs: [192.168.4.69/26] block=192.168.4.64/26 handle="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.932 [INFO][4469] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.69/26] handle="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.932 [INFO][4469] ipam_plugin.go 373: Released host-wide IPAM lock. 
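The IPAM sequence above shows Calico claiming 192.168.4.69 for calico-apiserver-855cc674c8-hgbfh out of the host's affine block 192.168.4.64/26 (the same block later hands out .70 to the -6c2xb pod). A quick sanity check of the block arithmetic, as a sketch in Python 3 with the standard ipaddress module:

import ipaddress

block = ipaddress.ip_network("192.168.4.64/26")      # host-affine block from the log
print(block.num_addresses)                           # 64 addresses, .64 through .127
for ip in ("192.168.4.66", "192.168.4.69", "192.168.4.70"):
    print(ip, ipaddress.ip_address(ip) in block)     # coredns, apiserver-hgbfh, apiserver-6c2xb: all True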
Jun 25 16:25:58.052170 containerd[1398]: 2024-06-25 16:25:57.932 [INFO][4469] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.4.69/26] IPv6=[] ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" HandleID="k8s-pod-network.90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" Jun 25 16:25:58.053552 containerd[1398]: 2024-06-25 16:25:57.938 [INFO][4439] k8s.go 386: Populated endpoint ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0", GenerateName:"calico-apiserver-855cc674c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"01603b95-859d-4b73-9f20-f6f39d9237ad", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855cc674c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-855cc674c8-hgbfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib93025ecdce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.053552 containerd[1398]: 2024-06-25 16:25:57.938 [INFO][4439] k8s.go 387: Calico CNI using IPs: [192.168.4.69/32] ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" Jun 25 16:25:58.053552 containerd[1398]: 2024-06-25 16:25:57.939 [INFO][4439] dataplane_linux.go 68: Setting the host side veth name to calib93025ecdce ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" Jun 25 16:25:58.053552 containerd[1398]: 2024-06-25 16:25:57.987 [INFO][4439] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" 
WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" Jun 25 16:25:58.053552 containerd[1398]: 2024-06-25 16:25:57.997 [INFO][4439] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0", GenerateName:"calico-apiserver-855cc674c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"01603b95-859d-4b73-9f20-f6f39d9237ad", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855cc674c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e", Pod:"calico-apiserver-855cc674c8-hgbfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib93025ecdce", MAC:"0a:94:37:c3:01:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.053552 containerd[1398]: 2024-06-25 16:25:58.026 [INFO][4439] k8s.go 500: Wrote updated endpoint to datastore ContainerID="90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-hgbfh" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--hgbfh-eth0" Jun 25 16:25:58.067363 containerd[1398]: time="2024-06-25T16:25:58.067301047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:25:58.067758 containerd[1398]: time="2024-06-25T16:25:58.067712417Z" level=info msg="RemovePodSandbox \"1268245009f57238e7353fe93d396cef3d8ab1dc9f11206615e1d68466dcb3ad\" returns successfully" Jun 25 16:25:58.069590 containerd[1398]: time="2024-06-25T16:25:58.069548789Z" level=info msg="StopPodSandbox for \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\"" Jun 25 16:25:58.103000 audit[4495]: NETFILTER_CFG table=filter:117 family=2 entries=55 op=nft_register_chain pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:58.103000 audit[4495]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7fff4a69ece0 a2=0 a3=7fff4a69eccc items=0 ppid=3520 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:58.103000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:58.118784 systemd-networkd[1152]: calie089dec8d85: Link UP Jun 25 16:25:58.128374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie089dec8d85: link becomes ready Jun 25 16:25:58.129697 systemd-networkd[1152]: calie089dec8d85: Gained carrier Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:57.778 [INFO][4450] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0 calico-apiserver-855cc674c8- calico-apiserver 8441501d-7308-40c9-8167-2fccd83f2b61 852 0 2024-06-25 16:25:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855cc674c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal calico-apiserver-855cc674c8-6c2xb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie089dec8d85 [] []}} ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:57.778 [INFO][4450] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:57.913 [INFO][4476] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" HandleID="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:57.980 [INFO][4476] ipam_plugin.go 264: Auto assigning IP ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" 
HandleID="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a48c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", "pod":"calico-apiserver-855cc674c8-6c2xb", "timestamp":"2024-06-25 16:25:57.913725374 +0000 UTC"}, Hostname:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:57.981 [INFO][4476] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:57.994 [INFO][4476] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:57.994 [INFO][4476] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal' Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.009 [INFO][4476] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.017 [INFO][4476] ipam.go 372: Looking up existing affinities for host host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.055 [INFO][4476] ipam.go 489: Trying affinity for 192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.058 [INFO][4476] ipam.go 155: Attempting to load block cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.062 [INFO][4476] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.4.64/26 host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.062 [INFO][4476] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.4.64/26 handle="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.069 [INFO][4476] ipam.go 1685: Creating new handle: k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38 Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.081 [INFO][4476] ipam.go 1203: Writing block in order to claim IPs block=192.168.4.64/26 handle="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.098 [INFO][4476] ipam.go 1216: Successfully claimed IPs: [192.168.4.70/26] block=192.168.4.64/26 handle="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.103 
[INFO][4476] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.4.70/26] handle="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" host="ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal" Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.103 [INFO][4476] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:58.167868 containerd[1398]: 2024-06-25 16:25:58.103 [INFO][4476] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.4.70/26] IPv6=[] ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" HandleID="k8s-pod-network.7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" Jun 25 16:25:58.173524 containerd[1398]: 2024-06-25 16:25:58.111 [INFO][4450] k8s.go 386: Populated endpoint ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0", GenerateName:"calico-apiserver-855cc674c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8441501d-7308-40c9-8167-2fccd83f2b61", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855cc674c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-855cc674c8-6c2xb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie089dec8d85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.173524 containerd[1398]: 2024-06-25 16:25:58.112 [INFO][4450] k8s.go 387: Calico CNI using IPs: [192.168.4.70/32] ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" Jun 25 16:25:58.173524 containerd[1398]: 2024-06-25 16:25:58.112 [INFO][4450] dataplane_linux.go 68: Setting the host side veth name to calie089dec8d85 ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" 
WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" Jun 25 16:25:58.173524 containerd[1398]: 2024-06-25 16:25:58.131 [INFO][4450] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" Jun 25 16:25:58.173524 containerd[1398]: 2024-06-25 16:25:58.133 [INFO][4450] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0", GenerateName:"calico-apiserver-855cc674c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"8441501d-7308-40c9-8167-2fccd83f2b61", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855cc674c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38", Pod:"calico-apiserver-855cc674c8-6c2xb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie089dec8d85", MAC:"62:82:c1:db:94:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.173524 containerd[1398]: 2024-06-25 16:25:58.158 [INFO][4450] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38" Namespace="calico-apiserver" Pod="calico-apiserver-855cc674c8-6c2xb" WorkloadEndpoint="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--apiserver--855cc674c8--6c2xb-eth0" Jun 25 16:25:58.225000 audit[4544]: NETFILTER_CFG table=filter:118 family=2 entries=49 op=nft_register_chain pid=4544 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:25:58.225000 audit[4544]: SYSCALL arch=c000003e syscall=46 success=yes exit=24300 a0=3 a1=7fffd89215f0 a2=0 a3=7fffd89215dc items=0 ppid=3520 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
16:25:58.225000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:25:58.266075 containerd[1398]: time="2024-06-25T16:25:58.265889975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:58.266514 containerd[1398]: time="2024-06-25T16:25:58.266434364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:58.266735 containerd[1398]: time="2024-06-25T16:25:58.266689414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:58.266886 containerd[1398]: time="2024-06-25T16:25:58.266857340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:58.269825 containerd[1398]: time="2024-06-25T16:25:58.269370882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:25:58.269825 containerd[1398]: time="2024-06-25T16:25:58.269458115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:58.269825 containerd[1398]: time="2024-06-25T16:25:58.269493785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:25:58.269825 containerd[1398]: time="2024-06-25T16:25:58.269524376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:25:58.367677 systemd[1]: run-containerd-runc-k8s.io-90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e-runc.4yEn2J.mount: Deactivated successfully. Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.436 [WARNING][4532] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0", GenerateName:"calico-kube-controllers-68bf9c55bd-", Namespace:"calico-system", SelfLink:"", UID:"b182dc22-0381-43d8-b490-0a8a7b490243", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bf9c55bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7", Pod:"calico-kube-controllers-68bf9c55bd-bl5gf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41a6fd8d7cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.436 [INFO][4532] k8s.go 608: Cleaning up netns ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.439 [INFO][4532] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" iface="eth0" netns="" Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.439 [INFO][4532] k8s.go 615: Releasing IP address(es) ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.439 [INFO][4532] utils.go 188: Calico CNI releasing IP address ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.522 [INFO][4606] ipam_plugin.go 411: Releasing address using handleID ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.522 [INFO][4606] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.523 [INFO][4606] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.539 [WARNING][4606] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.539 [INFO][4606] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.543 [INFO][4606] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:58.547747 containerd[1398]: 2024-06-25 16:25:58.545 [INFO][4532] k8s.go 621: Teardown processing complete. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.554310 containerd[1398]: time="2024-06-25T16:25:58.547817488Z" level=info msg="TearDown network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\" successfully" Jun 25 16:25:58.554310 containerd[1398]: time="2024-06-25T16:25:58.547871069Z" level=info msg="StopPodSandbox for \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\" returns successfully" Jun 25 16:25:58.554310 containerd[1398]: time="2024-06-25T16:25:58.548670080Z" level=info msg="RemovePodSandbox for \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\"" Jun 25 16:25:58.554310 containerd[1398]: time="2024-06-25T16:25:58.548730811Z" level=info msg="Forcibly stopping sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\"" Jun 25 16:25:58.575284 containerd[1398]: time="2024-06-25T16:25:58.572441770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cc674c8-hgbfh,Uid:01603b95-859d-4b73-9f20-f6f39d9237ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e\"" Jun 25 16:25:58.583655 containerd[1398]: time="2024-06-25T16:25:58.581235208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:25:58.624894 containerd[1398]: time="2024-06-25T16:25:58.624827848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855cc674c8-6c2xb,Uid:8441501d-7308-40c9-8167-2fccd83f2b61,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38\"" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.656 [WARNING][4642] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0", GenerateName:"calico-kube-controllers-68bf9c55bd-", Namespace:"calico-system", SelfLink:"", UID:"b182dc22-0381-43d8-b490-0a8a7b490243", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bf9c55bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"8144f41afa6265e6cd696311842cad96e9d0e381a4140ec412a227d22b0f93d7", Pod:"calico-kube-controllers-68bf9c55bd-bl5gf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41a6fd8d7cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.657 [INFO][4642] k8s.go 608: Cleaning up netns ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.657 [INFO][4642] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" iface="eth0" netns="" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.657 [INFO][4642] k8s.go 615: Releasing IP address(es) ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.657 [INFO][4642] utils.go 188: Calico CNI releasing IP address ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.702 [INFO][4654] ipam_plugin.go 411: Releasing address using handleID ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.702 [INFO][4654] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.702 [INFO][4654] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.716 [WARNING][4654] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.716 [INFO][4654] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" HandleID="k8s-pod-network.d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-calico--kube--controllers--68bf9c55bd--bl5gf-eth0" Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.732 [INFO][4654] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:58.745466 containerd[1398]: 2024-06-25 16:25:58.740 [INFO][4642] k8s.go 621: Teardown processing complete. ContainerID="d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6" Jun 25 16:25:58.746546 containerd[1398]: time="2024-06-25T16:25:58.745537843Z" level=info msg="TearDown network for sandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\" successfully" Jun 25 16:25:58.763057 containerd[1398]: time="2024-06-25T16:25:58.762917622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:25:58.764792 containerd[1398]: time="2024-06-25T16:25:58.764304344Z" level=info msg="RemovePodSandbox \"d0b8ad59c4aa0bdb91fa3103de8e94859cdce44988da7b48985b617719c610d6\" returns successfully" Jun 25 16:25:58.765881 containerd[1398]: time="2024-06-25T16:25:58.765822574Z" level=info msg="StopPodSandbox for \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\"" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.870 [WARNING][4688] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4", Pod:"csi-node-driver-p75ks", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali244e5e5ec67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.871 [INFO][4688] k8s.go 608: Cleaning up netns ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.871 [INFO][4688] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" iface="eth0" netns="" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.871 [INFO][4688] k8s.go 615: Releasing IP address(es) ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.871 [INFO][4688] utils.go 188: Calico CNI releasing IP address ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.921 [INFO][4697] ipam_plugin.go 411: Releasing address using handleID ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.921 [INFO][4697] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.922 [INFO][4697] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.937 [WARNING][4697] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.937 [INFO][4697] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.960 [INFO][4697] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:58.969415 containerd[1398]: 2024-06-25 16:25:58.963 [INFO][4688] k8s.go 621: Teardown processing complete. ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:58.970330 containerd[1398]: time="2024-06-25T16:25:58.969494985Z" level=info msg="TearDown network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\" successfully" Jun 25 16:25:58.970330 containerd[1398]: time="2024-06-25T16:25:58.969553640Z" level=info msg="StopPodSandbox for \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\" returns successfully" Jun 25 16:25:58.970330 containerd[1398]: time="2024-06-25T16:25:58.970303436Z" level=info msg="RemovePodSandbox for \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\"" Jun 25 16:25:58.970495 containerd[1398]: time="2024-06-25T16:25:58.970361389Z" level=info msg="Forcibly stopping sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\"" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.094 [WARNING][4720] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0f943ce0-f09c-40ca-9640-4ebcb02d1c9f", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"168368420d136d0c81baea561a6d521beba504b6aa449d1a5dc1d59cce9b04b4", Pod:"csi-node-driver-p75ks", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.4.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali244e5e5ec67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.094 [INFO][4720] k8s.go 608: Cleaning up netns ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.094 [INFO][4720] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" iface="eth0" netns="" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.094 [INFO][4720] k8s.go 615: Releasing IP address(es) ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.094 [INFO][4720] utils.go 188: Calico CNI releasing IP address ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.148 [INFO][4728] ipam_plugin.go 411: Releasing address using handleID ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.149 [INFO][4728] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.149 [INFO][4728] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.161 [WARNING][4728] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.161 [INFO][4728] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" HandleID="k8s-pod-network.9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-csi--node--driver--p75ks-eth0" Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.167 [INFO][4728] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:59.170876 containerd[1398]: 2024-06-25 16:25:59.169 [INFO][4720] k8s.go 621: Teardown processing complete. ContainerID="9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b" Jun 25 16:25:59.171872 containerd[1398]: time="2024-06-25T16:25:59.171072113Z" level=info msg="TearDown network for sandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\" successfully" Jun 25 16:25:59.177961 containerd[1398]: time="2024-06-25T16:25:59.177892597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:25:59.178295 containerd[1398]: time="2024-06-25T16:25:59.178117528Z" level=info msg="RemovePodSandbox \"9ea1c6279b6b8b8173156caa430b877f172d0ed0aa60bb7b300986cbf5282e4b\" returns successfully" Jun 25 16:25:59.178984 containerd[1398]: time="2024-06-25T16:25:59.178927976Z" level=info msg="StopPodSandbox for \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\"" Jun 25 16:25:59.193445 systemd-networkd[1152]: calie089dec8d85: Gained IPv6LL Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.242 [WARNING][4753] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"50a27c96-8776-4abc-85f1-1753c70aac48", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f", Pod:"coredns-5dd5756b68-jmv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa03c072178", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.242 [INFO][4753] k8s.go 608: Cleaning up netns ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.243 [INFO][4753] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" iface="eth0" netns="" Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.243 [INFO][4753] k8s.go 615: Releasing IP address(es) ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.243 [INFO][4753] utils.go 188: Calico CNI releasing IP address ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.284 [INFO][4759] ipam_plugin.go 411: Releasing address using handleID ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.284 [INFO][4759] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.284 [INFO][4759] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.292 [WARNING][4759] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.292 [INFO][4759] ipam_plugin.go 439: Releasing address using workloadID ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.294 [INFO][4759] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:59.298056 containerd[1398]: 2024-06-25 16:25:59.295 [INFO][4753] k8s.go 621: Teardown processing complete. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.298056 containerd[1398]: time="2024-06-25T16:25:59.297436700Z" level=info msg="TearDown network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\" successfully" Jun 25 16:25:59.298056 containerd[1398]: time="2024-06-25T16:25:59.297491908Z" level=info msg="StopPodSandbox for \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\" returns successfully" Jun 25 16:25:59.299549 containerd[1398]: time="2024-06-25T16:25:59.299497901Z" level=info msg="RemovePodSandbox for \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\"" Jun 25 16:25:59.299818 containerd[1398]: time="2024-06-25T16:25:59.299734062Z" level=info msg="Forcibly stopping sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\"" Jun 25 16:25:59.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.51:22-139.178.89.65:46556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:25:59.344128 systemd[1]: Started sshd@7-10.128.0.51:22-139.178.89.65:46556.service - OpenSSH per-connection server daemon (139.178.89.65:46556). Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.375 [WARNING][4779] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"50a27c96-8776-4abc-85f1-1753c70aac48", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 25, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815-2-4-22271f29442157caa62b.c.flatcar-212911.internal", ContainerID:"7a611c7a88db3569e1e31dfc113d760bec989b823a997886b98a04f0ec41dc9f", Pod:"coredns-5dd5756b68-jmv4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa03c072178", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.375 [INFO][4779] k8s.go 608: Cleaning up netns ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.375 [INFO][4779] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" iface="eth0" netns="" Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.375 [INFO][4779] k8s.go 615: Releasing IP address(es) ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.375 [INFO][4779] utils.go 188: Calico CNI releasing IP address ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.476 [INFO][4787] ipam_plugin.go 411: Releasing address using handleID ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.477 [INFO][4787] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.477 [INFO][4787] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.489 [WARNING][4787] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.489 [INFO][4787] ipam_plugin.go 439: Releasing address using workloadID ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" HandleID="k8s-pod-network.501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Workload="ci--3815--2--4--22271f29442157caa62b.c.flatcar--212911.internal-k8s-coredns--5dd5756b68--jmv4k-eth0" Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.491 [INFO][4787] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:25:59.501204 containerd[1398]: 2024-06-25 16:25:59.498 [INFO][4779] k8s.go 621: Teardown processing complete. ContainerID="501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5" Jun 25 16:25:59.502409 containerd[1398]: time="2024-06-25T16:25:59.502314942Z" level=info msg="TearDown network for sandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\" successfully" Jun 25 16:25:59.508356 systemd-networkd[1152]: calib93025ecdce: Gained IPv6LL Jun 25 16:25:59.518628 containerd[1398]: time="2024-06-25T16:25:59.518552721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:25:59.518864 containerd[1398]: time="2024-06-25T16:25:59.518684193Z" level=info msg="RemovePodSandbox \"501883605f4ad691633c49909c4713f12e448c32475a58193603838ff082b0a5\" returns successfully" Jun 25 16:25:59.659000 audit[4784]: USER_ACCT pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:59.662117 sshd[4784]: Accepted publickey for core from 139.178.89.65 port 46556 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:25:59.662000 audit[4784]: CRED_ACQ pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:59.662000 audit[4784]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5d2ba5f0 a2=3 a3=7f087efa5480 items=0 ppid=1 pid=4784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:25:59.662000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:25:59.663621 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:25:59.674431 systemd-logind[1382]: New session 8 of user core. Jun 25 16:25:59.678789 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 16:25:59.692000 audit[4784]: USER_START pid=4784 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:25:59.696000 audit[4794]: CRED_ACQ pid=4794 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:00.048659 sshd[4784]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:00.052000 audit[4784]: USER_END pid=4784 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:00.052000 audit[4784]: CRED_DISP pid=4784 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:00.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.51:22-139.178.89.65:46556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:00.057428 systemd[1]: sshd@7-10.128.0.51:22-139.178.89.65:46556.service: Deactivated successfully. Jun 25 16:26:00.059234 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:26:00.066778 systemd-logind[1382]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:26:00.070801 systemd-logind[1382]: Removed session 8. 
Jun 25 16:26:01.287623 containerd[1398]: time="2024-06-25T16:26:01.287548538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.290207 containerd[1398]: time="2024-06-25T16:26:01.290129593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:26:01.292156 containerd[1398]: time="2024-06-25T16:26:01.292111147Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.296049 containerd[1398]: time="2024-06-25T16:26:01.296007833Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.302282 containerd[1398]: time="2024-06-25T16:26:01.302215969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.304794 containerd[1398]: time="2024-06-25T16:26:01.304720039Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.723380237s" Jun 25 16:26:01.304952 containerd[1398]: time="2024-06-25T16:26:01.304802038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:26:01.307144 containerd[1398]: time="2024-06-25T16:26:01.307097146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:26:01.308465 containerd[1398]: time="2024-06-25T16:26:01.308373230Z" level=info msg="CreateContainer within sandbox \"90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:26:01.351415 containerd[1398]: time="2024-06-25T16:26:01.351349144Z" level=info msg="CreateContainer within sandbox \"90534972a65e8ebf9f1a85aea820ad3e34da722a06b5d2d359eca7324188321e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b88d751db0606c2e08e134d199b1bb143f45dcdd5dffc9387e5dc29c2243e1c\"" Jun 25 16:26:01.352769 containerd[1398]: time="2024-06-25T16:26:01.352715082Z" level=info msg="StartContainer for \"5b88d751db0606c2e08e134d199b1bb143f45dcdd5dffc9387e5dc29c2243e1c\"" Jun 25 16:26:01.359727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711723350.mount: Deactivated successfully. 
Jun 25 16:26:01.527305 containerd[1398]: time="2024-06-25T16:26:01.527218355Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.529365 containerd[1398]: time="2024-06-25T16:26:01.529287290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 16:26:01.531062 containerd[1398]: time="2024-06-25T16:26:01.531009963Z" level=info msg="ImageUpdate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.534599 containerd[1398]: time="2024-06-25T16:26:01.534551992Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.538513 containerd[1398]: time="2024-06-25T16:26:01.538366029Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:26:01.541240 containerd[1398]: time="2024-06-25T16:26:01.541176089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 233.798463ms" Jun 25 16:26:01.541485 containerd[1398]: time="2024-06-25T16:26:01.541446933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:26:01.547099 containerd[1398]: time="2024-06-25T16:26:01.547048146Z" level=info msg="CreateContainer within sandbox \"7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:26:01.594224 containerd[1398]: time="2024-06-25T16:26:01.594138539Z" level=info msg="CreateContainer within sandbox \"7e4c3a27e9272465a43a7d3b56a20bf5ec7228f8e4e20a1aebeaad1e0b04fe38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"016ee712acff83a466cd9898b0aabbdf7e7403a5185ba86e0f7add7369e72665\"" Jun 25 16:26:01.595792 containerd[1398]: time="2024-06-25T16:26:01.595738559Z" level=info msg="StartContainer for \"016ee712acff83a466cd9898b0aabbdf7e7403a5185ba86e0f7add7369e72665\"" Jun 25 16:26:01.610635 containerd[1398]: time="2024-06-25T16:26:01.610572572Z" level=info msg="StartContainer for \"5b88d751db0606c2e08e134d199b1bb143f45dcdd5dffc9387e5dc29c2243e1c\" returns successfully" Jun 25 16:26:01.872184 kubelet[2481]: I0625 16:26:01.871499 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-855cc674c8-hgbfh" podStartSLOduration=3.146909616 podCreationTimestamp="2024-06-25 16:25:56 +0000 UTC" firstStartedPulling="2024-06-25 16:25:58.580794182 +0000 UTC m=+61.374407611" lastFinishedPulling="2024-06-25 16:26:01.305282695 +0000 UTC m=+64.098896135" observedRunningTime="2024-06-25 16:26:01.870211665 +0000 UTC m=+64.663825107" watchObservedRunningTime="2024-06-25 16:26:01.87139814 +0000 UTC m=+64.665011581" Jun 25 16:26:01.891529 containerd[1398]: time="2024-06-25T16:26:01.891445901Z" 
level=info msg="StartContainer for \"016ee712acff83a466cd9898b0aabbdf7e7403a5185ba86e0f7add7369e72665\" returns successfully" Jun 25 16:26:01.950000 audit[4886]: NETFILTER_CFG table=filter:119 family=2 entries=10 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:01.959219 kernel: kauditd_printk_skb: 19 callbacks suppressed Jun 25 16:26:01.959572 kernel: audit: type=1325 audit(1719332761.950:310): table=filter:119 family=2 entries=10 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:01.950000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffcdfc93560 a2=0 a3=7ffcdfc9354c items=0 ppid=2636 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:02.014629 kernel: audit: type=1300 audit(1719332761.950:310): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffcdfc93560 a2=0 a3=7ffcdfc9354c items=0 ppid=2636 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:01.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:02.030540 kernel: audit: type=1327 audit(1719332761.950:310): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:01.952000 audit[4886]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:02.080823 kernel: audit: type=1325 audit(1719332761.952:311): table=nat:120 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:02.081063 kernel: audit: type=1300 audit(1719332761.952:311): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcdfc93560 a2=0 a3=7ffcdfc9354c items=0 ppid=2636 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:01.952000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcdfc93560 a2=0 a3=7ffcdfc9354c items=0 ppid=2636 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:01.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:02.097313 kernel: audit: type=1327 audit(1719332761.952:311): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:02.338656 systemd[1]: run-containerd-runc-k8s.io-5b88d751db0606c2e08e134d199b1bb143f45dcdd5dffc9387e5dc29c2243e1c-runc.aYErCL.mount: Deactivated successfully. 
Jun 25 16:26:02.910000 audit[4890]: NETFILTER_CFG table=filter:121 family=2 entries=10 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:02.962157 kernel: audit: type=1325 audit(1719332762.910:312): table=filter:121 family=2 entries=10 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:02.962372 kernel: audit: type=1300 audit(1719332762.910:312): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff1f438780 a2=0 a3=7fff1f43876c items=0 ppid=2636 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:02.910000 audit[4890]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff1f438780 a2=0 a3=7fff1f43876c items=0 ppid=2636 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:02.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:02.979352 kernel: audit: type=1327 audit(1719332762.910:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:02.916000 audit[4890]: NETFILTER_CFG table=nat:122 family=2 entries=20 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:02.916000 audit[4890]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff1f438780 a2=0 a3=7fff1f43876c items=0 ppid=2636 pid=4890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:02.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:02.996293 kernel: audit: type=1325 audit(1719332762.916:313): table=nat:122 family=2 entries=20 op=nft_register_rule pid=4890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:04.067223 kubelet[2481]: I0625 16:26:04.067179 2481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-855cc674c8-6c2xb" podStartSLOduration=5.151681309 podCreationTimestamp="2024-06-25 16:25:56 +0000 UTC" firstStartedPulling="2024-06-25 16:25:58.627196863 +0000 UTC m=+61.420810283" lastFinishedPulling="2024-06-25 16:26:01.542619147 +0000 UTC m=+64.336232584" observedRunningTime="2024-06-25 16:26:02.871017194 +0000 UTC m=+65.664630626" watchObservedRunningTime="2024-06-25 16:26:04.06710361 +0000 UTC m=+66.860717052" Jun 25 16:26:04.105000 audit[4894]: NETFILTER_CFG table=filter:123 family=2 entries=9 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:04.105000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff3d4e8290 a2=0 a3=7fff3d4e827c items=0 ppid=2636 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:04.105000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:04.107000 audit[4894]: NETFILTER_CFG table=nat:124 family=2 entries=27 op=nft_register_chain pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:04.107000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff3d4e8290 a2=0 a3=7fff3d4e827c items=0 ppid=2636 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:04.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:04.506000 audit[4896]: NETFILTER_CFG table=filter:125 family=2 entries=8 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:04.506000 audit[4896]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd991d3ea0 a2=0 a3=7ffd991d3e8c items=0 ppid=2636 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:04.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:04.508000 audit[4896]: NETFILTER_CFG table=nat:126 family=2 entries=34 op=nft_register_chain pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:04.508000 audit[4896]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffd991d3ea0 a2=0 a3=7ffd991d3e8c items=0 ppid=2636 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:04.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:05.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.51:22-139.178.89.65:46558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:05.097510 systemd[1]: Started sshd@8-10.128.0.51:22-139.178.89.65:46558.service - OpenSSH per-connection server daemon (139.178.89.65:46558). 
Jun 25 16:26:05.389000 audit[4897]: USER_ACCT pid=4897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:05.391023 sshd[4897]: Accepted publickey for core from 139.178.89.65 port 46558 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:05.393000 audit[4897]: CRED_ACQ pid=4897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:05.393000 audit[4897]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe165f58a0 a2=3 a3=7f48d354c480 items=0 ppid=1 pid=4897 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:05.393000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:05.394611 sshd[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:05.403490 systemd-logind[1382]: New session 9 of user core. Jun 25 16:26:05.407723 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 16:26:05.417000 audit[4897]: USER_START pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:05.420000 audit[4900]: CRED_ACQ pid=4900 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:05.683520 sshd[4897]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:05.685000 audit[4897]: USER_END pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:05.685000 audit[4897]: CRED_DISP pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:05.688513 systemd[1]: sshd@8-10.128.0.51:22-139.178.89.65:46558.service: Deactivated successfully. Jun 25 16:26:05.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.51:22-139.178.89.65:46558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:05.691114 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:26:05.691546 systemd-logind[1382]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:26:05.693992 systemd-logind[1382]: Removed session 9. Jun 25 16:26:10.732195 systemd[1]: Started sshd@9-10.128.0.51:22-139.178.89.65:57730.service - OpenSSH per-connection server daemon (139.178.89.65:57730). 
Jun 25 16:26:10.754281 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:26:10.754469 kernel: audit: type=1130 audit(1719332770.732:327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.51:22-139.178.89.65:57730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:10.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.51:22-139.178.89.65:57730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.018000 audit[4920]: USER_ACCT pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.022596 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:11.025065 sshd[4920]: Accepted publickey for core from 139.178.89.65 port 57730 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:11.034701 systemd-logind[1382]: New session 10 of user core. Jun 25 16:26:11.130796 kernel: audit: type=1101 audit(1719332771.018:328): pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.130877 kernel: audit: type=1103 audit(1719332771.020:329): pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.130942 kernel: audit: type=1006 audit(1719332771.020:330): pid=4920 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:26:11.130982 kernel: audit: type=1300 audit(1719332771.020:330): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee3805b60 a2=3 a3=7fb672587480 items=0 ppid=1 pid=4920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:11.131014 kernel: audit: type=1327 audit(1719332771.020:330): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:11.020000 audit[4920]: CRED_ACQ pid=4920 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.020000 audit[4920]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee3805b60 a2=3 a3=7fb672587480 items=0 ppid=1 pid=4920 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:11.020000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:11.130873 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:26:11.140000 audit[4920]: USER_START pid=4920 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.144000 audit[4923]: CRED_ACQ pid=4923 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.197394 kernel: audit: type=1105 audit(1719332771.140:331): pid=4920 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.197579 kernel: audit: type=1103 audit(1719332771.144:332): pid=4923 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.388963 sshd[4920]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:11.391000 audit[4920]: USER_END pid=4920 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.395061 systemd-logind[1382]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:26:11.397061 systemd[1]: sshd@9-10.128.0.51:22-139.178.89.65:57730.service: Deactivated successfully. Jun 25 16:26:11.398558 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:26:11.401061 systemd-logind[1382]: Removed session 10. Jun 25 16:26:11.391000 audit[4920]: CRED_DISP pid=4920 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.424395 kernel: audit: type=1106 audit(1719332771.391:333): pid=4920 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.424450 kernel: audit: type=1104 audit(1719332771.391:334): pid=4920 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.51:22-139.178.89.65:57730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.458035 systemd[1]: Started sshd@10-10.128.0.51:22-139.178.89.65:57742.service - OpenSSH per-connection server daemon (139.178.89.65:57742). 
Jun 25 16:26:11.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.51:22-139.178.89.65:57742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:11.742000 audit[4934]: USER_ACCT pid=4934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.743065 sshd[4934]: Accepted publickey for core from 139.178.89.65 port 57742 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:11.744000 audit[4934]: CRED_ACQ pid=4934 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.744000 audit[4934]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff243e1b10 a2=3 a3=7f492297b480 items=0 ppid=1 pid=4934 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:11.744000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:11.745130 sshd[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:11.751887 systemd-logind[1382]: New session 11 of user core. Jun 25 16:26:11.756636 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 16:26:11.764000 audit[4934]: USER_START pid=4934 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:11.767000 audit[4937]: CRED_ACQ pid=4937 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:12.874891 sshd[4934]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:12.877000 audit[4934]: USER_END pid=4934 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:12.877000 audit[4934]: CRED_DISP pid=4934 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:12.880593 systemd[1]: sshd@10-10.128.0.51:22-139.178.89.65:57742.service: Deactivated successfully. Jun 25 16:26:12.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.51:22-139.178.89.65:57742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.882109 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:26:12.884166 systemd-logind[1382]: Session 11 logged out. 
Waiting for processes to exit. Jun 25 16:26:12.885950 systemd-logind[1382]: Removed session 11. Jun 25 16:26:12.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.51:22-139.178.89.65:57744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:12.923021 systemd[1]: Started sshd@11-10.128.0.51:22-139.178.89.65:57744.service - OpenSSH per-connection server daemon (139.178.89.65:57744). Jun 25 16:26:13.207000 audit[4944]: USER_ACCT pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:13.209033 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 57744 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:13.209000 audit[4944]: CRED_ACQ pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:13.209000 audit[4944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd7177e20 a2=3 a3=7f2b21188480 items=0 ppid=1 pid=4944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:13.209000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:13.211637 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:13.219547 systemd-logind[1382]: New session 12 of user core. Jun 25 16:26:13.227790 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:26:13.236000 audit[4944]: USER_START pid=4944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:13.239000 audit[4947]: CRED_ACQ pid=4947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:13.507010 sshd[4944]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:13.509000 audit[4944]: USER_END pid=4944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:13.509000 audit[4944]: CRED_DISP pid=4944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:13.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.51:22-139.178.89.65:57744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:13.513936 systemd[1]: sshd@11-10.128.0.51:22-139.178.89.65:57744.service: Deactivated successfully. Jun 25 16:26:13.515849 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:26:13.519034 systemd-logind[1382]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:26:13.521497 systemd-logind[1382]: Removed session 12. Jun 25 16:26:18.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.51:22-139.178.89.65:46812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:18.560524 systemd[1]: Started sshd@12-10.128.0.51:22-139.178.89.65:46812.service - OpenSSH per-connection server daemon (139.178.89.65:46812). Jun 25 16:26:18.566426 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:26:18.566569 kernel: audit: type=1130 audit(1719332778.559:354): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.51:22-139.178.89.65:46812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:18.854000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:18.868697 sshd[4969]: Accepted publickey for core from 139.178.89.65 port 46812 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:18.870720 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:18.883060 systemd-logind[1382]: New session 13 of user core. 
Jun 25 16:26:18.928902 kernel: audit: type=1101 audit(1719332778.854:355): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:18.928968 kernel: audit: type=1103 audit(1719332778.865:356): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:18.929008 kernel: audit: type=1006 audit(1719332778.865:357): pid=4969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jun 25 16:26:18.929081 kernel: audit: type=1300 audit(1719332778.865:357): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd29101740 a2=3 a3=7f92d3ff2480 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:18.865000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:18.865000 audit[4969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd29101740 a2=3 a3=7f92d3ff2480 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:18.928107 systemd[1]: Started session-13.scope - Session 13 of User core. 
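The kernel mirrors each record with a numeric type= field that pairs with the symbolic names used elsewhere at the same stamp (type=1101 pairs with USER_ACCT, 1105 with USER_START, and so on). A small lookup table, limited to the types that actually occur in this capture (1131 for SERVICE_STOP is the standard counterpart of 1130, even though only its symbolic form appears above), keeps the two views straight:

    # Numeric audit record types seen in this log and their symbolic names,
    # as paired by matching audit(<epoch>:<serial>) stamps above.
    AUDIT_TYPES = {
        1006: "LOGIN",
        1101: "USER_ACCT",
        1103: "CRED_ACQ",
        1104: "CRED_DISP",
        1105: "USER_START",
        1106: "USER_END",
        1130: "SERVICE_START",
        1131: "SERVICE_STOP",
        1300: "SYSCALL",
        1325: "NETFILTER_CFG",
        1327: "PROCTITLE",
    }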
Jun 25 16:26:18.958081 kernel: audit: type=1327 audit(1719332778.865:357): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:18.865000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:18.940000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:18.997966 kernel: audit: type=1105 audit(1719332778.940:358): pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:18.955000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:19.023305 kernel: audit: type=1103 audit(1719332778.955:359): pid=4972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:19.272438 sshd[4969]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:19.277000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:19.311362 kernel: audit: type=1106 audit(1719332779.277:360): pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:19.282000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:19.333001 systemd[1]: sshd@12-10.128.0.51:22-139.178.89.65:46812.service: Deactivated successfully. Jun 25 16:26:19.334860 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:26:19.337889 systemd-logind[1382]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:26:19.345582 kernel: audit: type=1104 audit(1719332779.282:361): pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:19.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.51:22-139.178.89.65:46812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:19.346599 systemd-logind[1382]: Removed session 13. 
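systemd names each per-connection unit after the listening and peer sockets, e.g. sshd@12-10.128.0.51:22-139.178.89.65:46812.service. When summarizing a capture like this one it can help to split that instance name back into its parts; a sketch that assumes the IPv4 form used throughout this log (UNIT_RE is an illustrative name, not anything systemd provides):

    # Parse "sshd@<n>-<local-ip>:<port>-<peer-ip>:<port>.service" instance names.
    import re

    UNIT_RE = re.compile(
        r"sshd@(?P<seq>\d+)-(?P<local>[\d.]+:\d+)-(?P<peer>[\d.]+:\d+)\.service"
    )

    m = UNIT_RE.search("sshd@12-10.128.0.51:22-139.178.89.65:46812.service")
    print(m["seq"], m["local"], m["peer"])
    # 12 10.128.0.51:22 139.178.89.65:46812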
Jun 25 16:26:20.820873 systemd[1]: run-containerd-runc-k8s.io-a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75-runc.K8uKQu.mount: Deactivated successfully. Jun 25 16:26:24.324235 systemd[1]: Started sshd@13-10.128.0.51:22-139.178.89.65:46818.service - OpenSSH per-connection server daemon (139.178.89.65:46818). Jun 25 16:26:24.345707 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:26:24.345922 kernel: audit: type=1130 audit(1719332784.323:363): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.51:22-139.178.89.65:46818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.51:22-139.178.89.65:46818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:24.616000 audit[5008]: USER_ACCT pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.622401 sshd[5008]: Accepted publickey for core from 139.178.89.65 port 46818 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:24.624519 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:24.642302 systemd-logind[1382]: New session 14 of user core. Jun 25 16:26:24.622000 audit[5008]: CRED_ACQ pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.649929 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 16:26:24.673184 kernel: audit: type=1101 audit(1719332784.616:364): pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.673475 kernel: audit: type=1103 audit(1719332784.622:365): pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.622000 audit[5008]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc87b75a00 a2=3 a3=7f673c872480 items=0 ppid=1 pid=5008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:24.725695 kernel: audit: type=1006 audit(1719332784.622:366): pid=5008 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:26:24.726001 kernel: audit: type=1300 audit(1719332784.622:366): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc87b75a00 a2=3 a3=7f673c872480 items=0 ppid=1 pid=5008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:24.726056 kernel: audit: type=1327 audit(1719332784.622:366): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:24.622000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:24.657000 audit[5008]: USER_START pid=5008 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.767330 kernel: audit: type=1105 audit(1719332784.657:367): pid=5008 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.767632 kernel: audit: type=1103 audit(1719332784.677:368): pid=5011 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.677000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.942309 sshd[5008]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:24.943000 audit[5008]: USER_END pid=5008 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.950033 systemd-logind[1382]: Session 14 logged out. Waiting for processes to exit. 
Jun 25 16:26:24.953015 systemd[1]: sshd@13-10.128.0.51:22-139.178.89.65:46818.service: Deactivated successfully. Jun 25 16:26:24.954743 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:26:24.957751 systemd-logind[1382]: Removed session 14. Jun 25 16:26:24.943000 audit[5008]: CRED_DISP pid=5008 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:25.001361 kernel: audit: type=1106 audit(1719332784.943:369): pid=5008 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:25.001563 kernel: audit: type=1104 audit(1719332784.943:370): pid=5008 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:24.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.51:22-139.178.89.65:46818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:28.744223 systemd[1]: run-containerd-runc-k8s.io-633e3ba82adf15d9fc5743c7ffec6e1a715e59ca9af6ab85e457ed2aa56ef388-runc.ECM6ro.mount: Deactivated successfully. Jun 25 16:26:29.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.51:22-139.178.89.65:55234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:29.993304 systemd[1]: Started sshd@14-10.128.0.51:22-139.178.89.65:55234.service - OpenSSH per-connection server daemon (139.178.89.65:55234). Jun 25 16:26:30.023904 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:26:30.024040 kernel: audit: type=1130 audit(1719332789.993:372): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.51:22-139.178.89.65:55234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:30.289000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.292971 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:30.294595 sshd[5059]: Accepted publickey for core from 139.178.89.65 port 55234 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:30.305806 systemd-logind[1382]: New session 15 of user core. 
Jun 25 16:26:30.392065 kernel: audit: type=1101 audit(1719332790.289:373): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.392128 kernel: audit: type=1103 audit(1719332790.291:374): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.392165 kernel: audit: type=1006 audit(1719332790.292:375): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:26:30.392204 kernel: audit: type=1300 audit(1719332790.292:375): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8da5b600 a2=3 a3=7f19774a7480 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:30.392270 kernel: audit: type=1327 audit(1719332790.292:375): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:30.291000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.292000 audit[5059]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8da5b600 a2=3 a3=7f19774a7480 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:30.292000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:30.391240 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 16:26:30.410797 kernel: audit: type=1105 audit(1719332790.401:376): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.401000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.412000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.436717 kernel: audit: type=1103 audit(1719332790.412:377): pid=5062 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.671165 sshd[5059]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:30.673000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.686776 systemd[1]: sshd@14-10.128.0.51:22-139.178.89.65:55234.service: Deactivated successfully. Jun 25 16:26:30.688784 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:26:30.691728 systemd-logind[1382]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:26:30.694039 systemd-logind[1382]: Removed session 15. Jun 25 16:26:30.673000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.707312 kernel: audit: type=1106 audit(1719332790.673:378): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.707437 kernel: audit: type=1104 audit(1719332790.673:379): pid=5059 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:30.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.51:22-139.178.89.65:55234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:35.721084 systemd[1]: Started sshd@15-10.128.0.51:22-139.178.89.65:55242.service - OpenSSH per-connection server daemon (139.178.89.65:55242). 
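The same open/close pattern repeats for sessions 10 through 22: USER_ACCT and CRED_ACQ on accept, session-<N>.scope started, then USER_END/CRED_DISP and the scope torn down a few hundred milliseconds to a couple of seconds later. To pull per-session durations out of a capture like this, pairing logind's "New session"/"Removed session" messages is enough; a rough sketch (session_spans is an illustrative helper), assuming all entries fall in the same year since the journal lines omit it:

    # Pair "New session N of user core." with "Removed session N." to get
    # approximate session lifetimes from systemd-logind journal lines.
    import re
    from datetime import datetime

    LINE_RE = re.compile(
        r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: "
        r"(?P<event>New session|Removed session) (?P<sid>\d+)"
    )

    def session_spans(lines, year=2024):
        opened, spans = {}, {}
        for line in lines:
            # finditer also copes with several entries run together on one line,
            # as in this capture.
            for m in LINE_RE.finditer(line):
                ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
                if m["event"] == "New session":
                    opened[m["sid"]] = ts
                elif m["sid"] in opened:
                    spans[m["sid"]] = ts - opened.pop(m["sid"])
        return spans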
Jun 25 16:26:35.740959 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:26:35.741127 kernel: audit: type=1130 audit(1719332795.721:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.51:22-139.178.89.65:55242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:35.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.51:22-139.178.89.65:55242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:36.009000 audit[5077]: USER_ACCT pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.013545 sshd[5077]: Accepted publickey for core from 139.178.89.65 port 55242 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:36.015384 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:36.027195 systemd-logind[1382]: New session 16 of user core. Jun 25 16:26:36.085627 kernel: audit: type=1101 audit(1719332796.009:382): pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.085696 kernel: audit: type=1103 audit(1719332796.009:383): pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.085730 kernel: audit: type=1006 audit(1719332796.009:384): pid=5077 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:26:36.085776 kernel: audit: type=1300 audit(1719332796.009:384): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd22a48350 a2=3 a3=7fca01f46480 items=0 ppid=1 pid=5077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:36.009000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.009000 audit[5077]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd22a48350 a2=3 a3=7fca01f46480 items=0 ppid=1 pid=5077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:36.084871 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:26:36.009000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:36.112304 kernel: audit: type=1327 audit(1719332796.009:384): proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:36.097000 audit[5077]: USER_START pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.152966 kernel: audit: type=1105 audit(1719332796.097:385): pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.153157 kernel: audit: type=1103 audit(1719332796.102:386): pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.102000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.353816 sshd[5077]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:36.355000 audit[5077]: USER_END pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.356000 audit[5077]: CRED_DISP pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.390452 kernel: audit: type=1106 audit(1719332796.355:387): pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.390567 kernel: audit: type=1104 audit(1719332796.356:388): pid=5077 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.400364 systemd[1]: sshd@15-10.128.0.51:22-139.178.89.65:55242.service: Deactivated successfully. Jun 25 16:26:36.401682 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:26:36.403714 systemd-logind[1382]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:26:36.405452 systemd-logind[1382]: Removed session 16. Jun 25 16:26:36.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.51:22-139.178.89.65:55242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:36.424031 systemd[1]: Started sshd@16-10.128.0.51:22-139.178.89.65:43438.service - OpenSSH per-connection server daemon (139.178.89.65:43438). Jun 25 16:26:36.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.51:22-139.178.89.65:43438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:36.705000 audit[5090]: USER_ACCT pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.707408 sshd[5090]: Accepted publickey for core from 139.178.89.65 port 43438 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:36.707000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.707000 audit[5090]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca1d95870 a2=3 a3=7f61fa4c2480 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:36.707000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:36.709494 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:36.715373 systemd-logind[1382]: New session 17 of user core. Jun 25 16:26:36.718652 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 16:26:36.727000 audit[5090]: USER_START pid=5090 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:36.729000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:37.068491 sshd[5090]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:37.070000 audit[5090]: USER_END pid=5090 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:37.070000 audit[5090]: CRED_DISP pid=5090 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:37.073640 systemd[1]: sshd@16-10.128.0.51:22-139.178.89.65:43438.service: Deactivated successfully. Jun 25 16:26:37.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.51:22-139.178.89.65:43438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:37.075685 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:26:37.075721 systemd-logind[1382]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:26:37.078368 systemd-logind[1382]: Removed session 17. Jun 25 16:26:37.116998 systemd[1]: Started sshd@17-10.128.0.51:22-139.178.89.65:43454.service - OpenSSH per-connection server daemon (139.178.89.65:43454). Jun 25 16:26:37.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.51:22-139.178.89.65:43454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:37.401000 audit[5100]: USER_ACCT pid=5100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:37.402774 sshd[5100]: Accepted publickey for core from 139.178.89.65 port 43454 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:37.403000 audit[5100]: CRED_ACQ pid=5100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:37.403000 audit[5100]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6d3d2d20 a2=3 a3=7fd335d09480 items=0 ppid=1 pid=5100 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:37.403000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:37.405314 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:37.412211 systemd-logind[1382]: New session 18 of user core. Jun 25 16:26:37.416688 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:26:37.424000 audit[5100]: USER_START pid=5100 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:37.427000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:38.643000 audit[5115]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:38.643000 audit[5115]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff1bfaf310 a2=0 a3=7fff1bfaf2fc items=0 ppid=2636 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:38.643000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:38.645000 audit[5115]: NETFILTER_CFG table=nat:128 family=2 entries=22 op=nft_register_rule pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:38.645000 audit[5115]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff1bfaf310 a2=0 a3=0 items=0 ppid=2636 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:38.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:38.654599 sshd[5100]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:38.655000 audit[5100]: USER_END pid=5100 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:38.656000 audit[5100]: CRED_DISP pid=5100 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:38.662826 systemd-logind[1382]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:26:38.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.51:22-139.178.89.65:43454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:38.667686 systemd[1]: sshd@17-10.128.0.51:22-139.178.89.65:43454.service: Deactivated successfully. Jun 25 16:26:38.669338 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:26:38.673481 systemd-logind[1382]: Removed session 18. Jun 25 16:26:38.703152 systemd[1]: Started sshd@18-10.128.0.51:22-139.178.89.65:43460.service - OpenSSH per-connection server daemon (139.178.89.65:43460). 
Jun 25 16:26:38.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.51:22-139.178.89.65:43460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:38.703000 audit[5120]: NETFILTER_CFG table=filter:129 family=2 entries=32 op=nft_register_rule pid=5120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:38.703000 audit[5120]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffde53628f0 a2=0 a3=7ffde53628dc items=0 ppid=2636 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:38.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:38.707000 audit[5120]: NETFILTER_CFG table=nat:130 family=2 entries=22 op=nft_register_rule pid=5120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:38.707000 audit[5120]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffde53628f0 a2=0 a3=0 items=0 ppid=2636 pid=5120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:38.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:38.993000 audit[5119]: USER_ACCT pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:38.995813 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 43460 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:38.996000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:38.996000 audit[5119]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff61c9f9a0 a2=3 a3=7f60786ac480 items=0 ppid=1 pid=5119 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:38.996000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:38.998188 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:39.009715 systemd-logind[1382]: New session 19 of user core. Jun 25 16:26:39.012694 systemd[1]: Started session-19.scope - Session 19 of User core. 
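The NETFILTER_CFG records interleaved with these sessions are periodic iptables-restore runs via xtables-nft-multi under a long-lived parent (ppid 2636); the runc/k8s.io mounts elsewhere in this log suggest a kube-proxy style rule resync, though the records themselves do not name the caller. Their PROCTITLE payload is the NUL-separated argv, and decoding it (same idea as the earlier sketch) recovers the exact flags used:

    # Decode the NUL-separated argv recorded for the iptables-restore calls above.
    argv = bytes.fromhex(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    ).split(b"\x00")
    print([a.decode("ascii") for a in argv])
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']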
Jun 25 16:26:39.023000 audit[5119]: USER_START pid=5119 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:39.027000 audit[5123]: CRED_ACQ pid=5123 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:39.620310 sshd[5119]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:39.622000 audit[5119]: USER_END pid=5119 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:39.622000 audit[5119]: CRED_DISP pid=5119 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:39.625363 systemd[1]: sshd@18-10.128.0.51:22-139.178.89.65:43460.service: Deactivated successfully. Jun 25 16:26:39.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.51:22-139.178.89.65:43460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:39.627678 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:26:39.627716 systemd-logind[1382]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:26:39.629432 systemd-logind[1382]: Removed session 19. Jun 25 16:26:39.669130 systemd[1]: Started sshd@19-10.128.0.51:22-139.178.89.65:43470.service - OpenSSH per-connection server daemon (139.178.89.65:43470). Jun 25 16:26:39.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.51:22-139.178.89.65:43470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:39.954000 audit[5131]: USER_ACCT pid=5131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:39.955322 sshd[5131]: Accepted publickey for core from 139.178.89.65 port 43470 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:39.956000 audit[5131]: CRED_ACQ pid=5131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:39.956000 audit[5131]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc143d97a0 a2=3 a3=7f655a49e480 items=0 ppid=1 pid=5131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:39.956000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:39.957398 sshd[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:39.964613 systemd-logind[1382]: New session 20 of user core. Jun 25 16:26:39.968698 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 16:26:39.978000 audit[5131]: USER_START pid=5131 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:39.981000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:40.241645 sshd[5131]: pam_unix(sshd:session): session closed for user core Jun 25 16:26:40.243000 audit[5131]: USER_END pid=5131 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:40.244000 audit[5131]: CRED_DISP pid=5131 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:40.247052 systemd[1]: sshd@19-10.128.0.51:22-139.178.89.65:43470.service: Deactivated successfully. Jun 25 16:26:40.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.51:22-139.178.89.65:43470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:40.249417 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:26:40.249479 systemd-logind[1382]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:26:40.252086 systemd-logind[1382]: Removed session 20. 
Jun 25 16:26:44.707000 audit[5152]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:44.714953 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:26:44.715137 kernel: audit: type=1325 audit(1719332804.707:430): table=filter:131 family=2 entries=20 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:44.707000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd5c53f820 a2=0 a3=7ffd5c53f80c items=0 ppid=2636 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:44.779802 kernel: audit: type=1300 audit(1719332804.707:430): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd5c53f820 a2=0 a3=7ffd5c53f80c items=0 ppid=2636 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:44.780011 kernel: audit: type=1327 audit(1719332804.707:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:44.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:44.712000 audit[5152]: NETFILTER_CFG table=nat:132 family=2 entries=106 op=nft_register_chain pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:44.796467 kernel: audit: type=1325 audit(1719332804.712:431): table=nat:132 family=2 entries=106 op=nft_register_chain pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:26:44.796680 kernel: audit: type=1300 audit(1719332804.712:431): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd5c53f820 a2=0 a3=7ffd5c53f80c items=0 ppid=2636 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:44.712000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd5c53f820 a2=0 a3=7ffd5c53f80c items=0 ppid=2636 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:44.830902 kernel: audit: type=1327 audit(1719332804.712:431): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:44.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:26:45.298417 systemd[1]: Started sshd@20-10.128.0.51:22-139.178.89.65:43484.service - OpenSSH per-connection server daemon (139.178.89.65:43484). Jun 25 16:26:45.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.51:22-139.178.89.65:43484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:26:45.324298 kernel: audit: type=1130 audit(1719332805.297:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.51:22-139.178.89.65:43484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:26:45.596000 audit[5154]: USER_ACCT pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:45.600236 sshd[5154]: Accepted publickey for core from 139.178.89.65 port 43484 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg Jun 25 16:26:45.602124 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:26:45.614250 systemd-logind[1382]: New session 21 of user core. Jun 25 16:26:45.669810 kernel: audit: type=1101 audit(1719332805.596:433): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:45.669878 kernel: audit: type=1103 audit(1719332805.600:434): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:45.669923 kernel: audit: type=1006 audit(1719332805.600:435): pid=5154 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jun 25 16:26:45.600000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:26:45.600000 audit[5154]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0545ef00 a2=3 a3=7f4c1a4b2480 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:26:45.600000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:26:45.669899 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 16:26:45.679000 audit[5154]: USER_START pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:45.682000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:46.016791 sshd[5154]: pam_unix(sshd:session): session closed for user core
Jun 25 16:26:46.017000 audit[5154]: USER_END pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:46.018000 audit[5154]: CRED_DISP pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:46.023390 systemd[1]: sshd@20-10.128.0.51:22-139.178.89.65:43484.service: Deactivated successfully.
Jun 25 16:26:46.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.51:22-139.178.89.65:43484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:26:46.025797 systemd-logind[1382]: Session 21 logged out. Waiting for processes to exit.
Jun 25 16:26:46.025871 systemd[1]: session-21.scope: Deactivated successfully.
Jun 25 16:26:46.027988 systemd-logind[1382]: Removed session 21.
Jun 25 16:26:50.822105 systemd[1]: run-containerd-runc-k8s.io-a6dcdde2a8287aef3cd620b66223397ef4220bdb4fadad4952de20a2ebcdab75-runc.FWU3iR.mount: Deactivated successfully.
Jun 25 16:26:51.087888 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jun 25 16:26:51.088192 kernel: audit: type=1130 audit(1719332811.064:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.51:22-139.178.89.65:43838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:26:51.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.51:22-139.178.89.65:43838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:26:51.065145 systemd[1]: Started sshd@21-10.128.0.51:22-139.178.89.65:43838.service - OpenSSH per-connection server daemon (139.178.89.65:43838).
Jun 25 16:26:51.358000 audit[5187]: USER_ACCT pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.381737 sshd[5187]: Accepted publickey for core from 139.178.89.65 port 43838 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg
Jun 25 16:26:51.389150 sshd[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:26:51.389752 kernel: audit: type=1101 audit(1719332811.358:442): pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.389822 kernel: audit: type=1103 audit(1719332811.387:443): pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.387000 audit[5187]: CRED_ACQ pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.413006 systemd-logind[1382]: New session 22 of user core.
Jun 25 16:26:51.434288 kernel: audit: type=1006 audit(1719332811.387:444): pid=5187 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Jun 25 16:26:51.434356 kernel: audit: type=1300 audit(1719332811.387:444): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5e789f10 a2=3 a3=7fbe401dd480 items=0 ppid=1 pid=5187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:26:51.387000 audit[5187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5e789f10 a2=3 a3=7fbe401dd480 items=0 ppid=1 pid=5187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:26:51.433035 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 25 16:26:51.387000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:26:51.475540 kernel: audit: type=1327 audit(1719332811.387:444): proctitle=737368643A20636F7265205B707269765D
Jun 25 16:26:51.447000 audit[5187]: USER_START pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.518218 kernel: audit: type=1105 audit(1719332811.447:445): pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.518499 kernel: audit: type=1103 audit(1719332811.462:446): pid=5190 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.462000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.738459 sshd[5187]: pam_unix(sshd:session): session closed for user core
Jun 25 16:26:51.739000 audit[5187]: USER_END pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.744554 systemd[1]: sshd@21-10.128.0.51:22-139.178.89.65:43838.service: Deactivated successfully.
Jun 25 16:26:51.746250 systemd[1]: session-22.scope: Deactivated successfully.
Jun 25 16:26:51.760630 systemd-logind[1382]: Session 22 logged out. Waiting for processes to exit.
Jun 25 16:26:51.762452 systemd-logind[1382]: Removed session 22.
Jun 25 16:26:51.773289 kernel: audit: type=1106 audit(1719332811.739:447): pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.773477 kernel: audit: type=1104 audit(1719332811.739:448): pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.739000 audit[5187]: CRED_DISP pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:51.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.51:22-139.178.89.65:43838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:26:56.789042 systemd[1]: Started sshd@22-10.128.0.51:22-139.178.89.65:38684.service - OpenSSH per-connection server daemon (139.178.89.65:38684).
Jun 25 16:26:56.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.51:22-139.178.89.65:38684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:26:56.795475 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jun 25 16:26:56.795605 kernel: audit: type=1130 audit(1719332816.788:450): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.51:22-139.178.89.65:38684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:26:57.078000 audit[5205]: USER_ACCT pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.082029 sshd[5205]: Accepted publickey for core from 139.178.89.65 port 38684 ssh2: RSA SHA256:WoHyxObyBOp3GIG9aczlLaR07aaOBMuNcDhpNk/cWQg
Jun 25 16:26:57.083939 sshd[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:26:57.096826 systemd-logind[1382]: New session 23 of user core.
Jun 25 16:26:57.183008 kernel: audit: type=1101 audit(1719332817.078:451): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.183078 kernel: audit: type=1103 audit(1719332817.078:452): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.183122 kernel: audit: type=1006 audit(1719332817.078:453): pid=5205 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jun 25 16:26:57.183156 kernel: audit: type=1300 audit(1719332817.078:453): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff729d1ff0 a2=3 a3=7f5aa77d6480 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:26:57.183193 kernel: audit: type=1327 audit(1719332817.078:453): proctitle=737368643A20636F7265205B707269765D
Jun 25 16:26:57.078000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.078000 audit[5205]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff729d1ff0 a2=3 a3=7f5aa77d6480 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:26:57.078000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jun 25 16:26:57.182073 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 25 16:26:57.192000 audit[5205]: USER_START pid=5205 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.227558 kernel: audit: type=1105 audit(1719332817.192:454): pid=5205 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.227791 kernel: audit: type=1103 audit(1719332817.192:455): pid=5208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.192000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.468437 sshd[5205]: pam_unix(sshd:session): session closed for user core
Jun 25 16:26:57.470000 audit[5205]: USER_END pid=5205 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.524851 kernel: audit: type=1106 audit(1719332817.470:456): pid=5205 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.525048 kernel: audit: type=1104 audit(1719332817.470:457): pid=5205 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.470000 audit[5205]: CRED_DISP pid=5205 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jun 25 16:26:57.511000 systemd[1]: sshd@22-10.128.0.51:22-139.178.89.65:38684.service: Deactivated successfully.
Jun 25 16:26:57.512730 systemd[1]: session-23.scope: Deactivated successfully.
Jun 25 16:26:57.529903 systemd-logind[1382]: Session 23 logged out. Waiting for processes to exit.
Jun 25 16:26:57.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.51:22-139.178.89.65:38684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:26:57.532061 systemd-logind[1382]: Removed session 23.
Jun 25 16:26:58.745112 systemd[1]: run-containerd-runc-k8s.io-633e3ba82adf15d9fc5743c7ffec6e1a715e59ca9af6ab85e457ed2aa56ef388-runc.P0E0CX.mount: Deactivated successfully.
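For reference, the number inside each audit(...) marker is the event's Unix timestamp in seconds, and the value after the colon is the kernel's audit event serial number. A quick check, assuming Python 3, against one of the entries above:

    import datetime
    # 1719332816 is the seconds value taken from 'audit(1719332816.788:450)' above.
    print(datetime.datetime.fromtimestamp(1719332816, tz=datetime.timezone.utc))
    # -> 2024-06-25 16:26:56+00:00, matching the journal's own 16:26:56 timestamps.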