Jun 25 16:27:32.997866 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:27:32.997908 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:27:32.997928 kernel: BIOS-provided physical RAM map: Jun 25 16:27:32.997952 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:27:32.997958 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:27:32.997964 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:27:32.997972 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jun 25 16:27:32.997978 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jun 25 16:27:32.997984 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 16:27:32.997994 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:27:32.998004 kernel: NX (Execute Disable) protection: active Jun 25 16:27:32.998013 kernel: SMBIOS 2.8 present. Jun 25 16:27:32.998022 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jun 25 16:27:32.998031 kernel: Hypervisor detected: KVM Jun 25 16:27:32.998043 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:27:32.998056 kernel: kvm-clock: using sched offset of 4672223074 cycles Jun 25 16:27:32.998069 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:27:32.998076 kernel: tsc: Detected 1995.312 MHz processor Jun 25 16:27:32.998083 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:27:32.998090 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:27:32.998097 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jun 25 16:27:32.998104 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:27:32.998111 kernel: ACPI: Early table checksum verification disabled Jun 25 16:27:32.998118 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jun 25 16:27:32.998127 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:27:32.998134 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:27:32.998141 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:27:32.998148 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jun 25 16:27:32.998154 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:27:32.998161 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:27:32.998168 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:27:32.998176 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:27:32.998189 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jun 25 16:27:32.998200 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jun 25 16:27:32.998209 kernel: 
ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jun 25 16:27:32.998219 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jun 25 16:27:32.998229 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jun 25 16:27:32.998235 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jun 25 16:27:32.998242 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jun 25 16:27:32.998249 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 16:27:32.998262 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 16:27:32.998269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 25 16:27:32.998278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 25 16:27:32.998290 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jun 25 16:27:32.998303 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jun 25 16:27:32.998315 kernel: Zone ranges: Jun 25 16:27:32.998323 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:27:32.998336 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jun 25 16:27:32.998346 kernel: Normal empty Jun 25 16:27:32.998357 kernel: Movable zone start for each node Jun 25 16:27:32.998369 kernel: Early memory node ranges Jun 25 16:27:32.998383 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:27:32.998394 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jun 25 16:27:32.998401 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jun 25 16:27:32.998408 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:27:32.998414 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:27:32.998425 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jun 25 16:27:32.998437 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 16:27:32.998448 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:27:32.998462 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:27:32.998475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 16:27:32.998488 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:27:32.998498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:27:32.998504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:27:32.998515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:27:32.998529 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:27:32.998539 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:27:32.998549 kernel: TSC deadline timer available Jun 25 16:27:32.998559 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 16:27:32.998570 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jun 25 16:27:32.998580 kernel: Booting paravirtualized kernel on KVM Jun 25 16:27:32.998590 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:27:32.998601 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 16:27:32.998610 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jun 25 16:27:32.998624 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jun 25 16:27:32.998634 kernel: pcpu-alloc: [0] 0 1 Jun 25 16:27:32.998644 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 25 
16:27:32.998656 kernel: Fallback order for Node 0: 0 Jun 25 16:27:32.998666 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jun 25 16:27:32.998678 kernel: Policy zone: DMA32 Jun 25 16:27:32.998691 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:27:32.998702 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:27:32.998717 kernel: random: crng init done Jun 25 16:27:32.998730 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:27:32.998737 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 16:27:32.998744 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:27:32.998752 kernel: Memory: 1967112K/2096600K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129228K reserved, 0K cma-reserved) Jun 25 16:27:32.998759 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 16:27:32.998766 kernel: Kernel/User page tables isolation: enabled Jun 25 16:27:32.998775 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:27:32.998786 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:27:32.998801 kernel: Dynamic Preempt: voluntary Jun 25 16:27:32.998812 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:27:32.998824 kernel: rcu: RCU event tracing is enabled. Jun 25 16:27:32.998833 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 16:27:32.998841 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:27:32.998848 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:27:32.998855 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:27:32.998863 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:27:32.998876 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 16:27:32.998886 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 16:27:32.998893 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:27:32.998900 kernel: Console: colour VGA+ 80x25 Jun 25 16:27:32.998907 kernel: printk: console [tty0] enabled Jun 25 16:27:32.998915 kernel: printk: console [ttyS0] enabled Jun 25 16:27:32.998921 kernel: ACPI: Core revision 20220331 Jun 25 16:27:32.998929 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 16:27:32.998958 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:27:32.998965 kernel: x2apic enabled Jun 25 16:27:32.998977 kernel: Switched APIC routing to physical x2apic. Jun 25 16:27:32.998984 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:27:32.998992 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Jun 25 16:27:32.999000 kernel: Calibrating delay loop (skipped) preset value.. 
3990.62 BogoMIPS (lpj=1995312) Jun 25 16:27:32.999012 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 25 16:27:32.999020 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 25 16:27:32.999027 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:27:32.999034 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:27:32.999041 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:27:32.999061 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:27:32.999072 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jun 25 16:27:32.999084 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:27:32.999100 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:27:32.999107 kernel: MDS: Mitigation: Clear CPU buffers Jun 25 16:27:32.999115 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 16:27:32.999122 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:27:32.999129 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:27:32.999137 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:27:32.999147 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:27:32.999155 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jun 25 16:27:32.999162 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:27:32.999170 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:27:32.999177 kernel: LSM: Security Framework initializing Jun 25 16:27:32.999184 kernel: SELinux: Initializing. Jun 25 16:27:32.999192 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:27:32.999201 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 16:27:32.999208 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jun 25 16:27:32.999216 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:27:32.999223 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:27:32.999230 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:27:32.999238 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:27:32.999245 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:27:32.999252 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 16:27:32.999259 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Jun 25 16:27:32.999266 kernel: signal: max sigframe size: 1776 Jun 25 16:27:32.999276 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:27:32.999284 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:27:32.999291 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 16:27:32.999298 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:27:32.999308 kernel: x86: Booting SMP configuration: Jun 25 16:27:32.999319 kernel: .... 
node #0, CPUs: #1 Jun 25 16:27:32.999330 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 16:27:32.999344 kernel: smpboot: Max logical packages: 1 Jun 25 16:27:32.999357 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Jun 25 16:27:32.999373 kernel: devtmpfs: initialized Jun 25 16:27:32.999387 kernel: x86/mm: Memory block size: 128MB Jun 25 16:27:32.999401 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:27:32.999414 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 16:27:32.999428 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:27:32.999440 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:27:32.999452 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:27:32.999464 kernel: audit: type=2000 audit(1719332851.208:1): state=initialized audit_enabled=0 res=1 Jun 25 16:27:32.999471 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:27:32.999482 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:27:32.999490 kernel: cpuidle: using governor menu Jun 25 16:27:32.999497 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:27:32.999504 kernel: dca service started, version 1.12.1 Jun 25 16:27:32.999512 kernel: PCI: Using configuration type 1 for base access Jun 25 16:27:32.999525 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:27:32.999535 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:27:32.999545 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:27:32.999574 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:27:32.999591 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:27:32.999648 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:27:32.999666 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:27:32.999679 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:27:32.999692 kernel: ACPI: Interpreter enabled Jun 25 16:27:32.999705 kernel: ACPI: PM: (supports S0 S5) Jun 25 16:27:32.999716 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:27:32.999728 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:27:32.999739 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:27:32.999750 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 16:27:32.999766 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:27:33.007158 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:27:33.007328 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 16:27:33.007459 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jun 25 16:27:33.007478 kernel: acpiphp: Slot [3] registered Jun 25 16:27:33.007492 kernel: acpiphp: Slot [4] registered Jun 25 16:27:33.007505 kernel: acpiphp: Slot [5] registered Jun 25 16:27:33.007531 kernel: acpiphp: Slot [6] registered Jun 25 16:27:33.007545 kernel: acpiphp: Slot [7] registered Jun 25 16:27:33.007572 kernel: acpiphp: Slot [8] registered Jun 25 16:27:33.007585 kernel: acpiphp: Slot [9] registered Jun 25 16:27:33.007598 kernel: acpiphp: Slot [10] registered Jun 25 16:27:33.007610 kernel: acpiphp: Slot [11] registered Jun 25 16:27:33.007622 kernel: acpiphp: Slot [12] registered Jun 25 16:27:33.007633 kernel: acpiphp: Slot [13] registered Jun 25 16:27:33.007647 kernel: acpiphp: Slot [14] registered Jun 25 16:27:33.007665 kernel: acpiphp: Slot [15] registered Jun 25 16:27:33.007679 kernel: acpiphp: Slot [16] registered Jun 25 16:27:33.007693 kernel: acpiphp: Slot [17] registered Jun 25 16:27:33.007705 kernel: acpiphp: Slot [18] registered Jun 25 16:27:33.007716 kernel: acpiphp: Slot [19] registered Jun 25 16:27:33.007727 kernel: acpiphp: Slot [20] registered Jun 25 16:27:33.007738 kernel: acpiphp: Slot [21] registered Jun 25 16:27:33.007749 kernel: acpiphp: Slot [22] registered Jun 25 16:27:33.007763 kernel: acpiphp: Slot [23] registered Jun 25 16:27:33.007777 kernel: acpiphp: Slot [24] registered Jun 25 16:27:33.007792 kernel: acpiphp: Slot [25] registered Jun 25 16:27:33.007804 kernel: acpiphp: Slot [26] registered Jun 25 16:27:33.007818 kernel: acpiphp: Slot [27] registered Jun 25 16:27:33.007832 kernel: acpiphp: Slot [28] registered Jun 25 16:27:33.007846 kernel: acpiphp: Slot [29] registered Jun 25 16:27:33.007859 kernel: acpiphp: Slot [30] registered Jun 25 16:27:33.007873 kernel: acpiphp: Slot [31] registered Jun 25 16:27:33.007886 kernel: PCI host bridge to bus 0000:00 Jun 25 16:27:33.008072 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:27:33.008192 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:27:33.008298 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:27:33.008377 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 16:27:33.008452 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 16:27:33.008528 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:27:33.008670 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:27:33.008823 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:27:33.008964 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 16:27:33.009054 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jun 25 16:27:33.009140 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:27:33.009224 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:27:33.009309 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:27:33.009393 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:27:33.009502 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jun 25 16:27:33.009602 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jun 25 16:27:33.009693 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 16:27:33.009777 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 16:27:33.009861 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 16:27:33.010070 
kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jun 25 16:27:33.010202 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jun 25 16:27:33.010297 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jun 25 16:27:33.010385 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jun 25 16:27:33.010493 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 25 16:27:33.010581 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:27:33.010689 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:27:33.010791 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jun 25 16:27:33.010882 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jun 25 16:27:33.010986 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jun 25 16:27:33.011093 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:27:33.011196 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jun 25 16:27:33.011326 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jun 25 16:27:33.017103 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jun 25 16:27:33.017294 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jun 25 16:27:33.017396 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jun 25 16:27:33.017482 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jun 25 16:27:33.017570 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jun 25 16:27:33.017663 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:27:33.017755 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 16:27:33.017880 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jun 25 16:27:33.018041 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jun 25 16:27:33.018192 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:27:33.018289 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jun 25 16:27:33.018381 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jun 25 16:27:33.018473 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jun 25 16:27:33.018586 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jun 25 16:27:33.018678 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jun 25 16:27:33.018868 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jun 25 16:27:33.018882 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:27:33.018890 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:27:33.018898 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:27:33.018906 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:27:33.018914 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:27:33.018927 kernel: iommu: Default domain type: Translated Jun 25 16:27:33.018956 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:27:33.018974 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:27:33.018987 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:27:33.018997 kernel: PTP clock support registered Jun 25 16:27:33.019004 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:27:33.019012 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:27:33.019020 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:27:33.019028 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jun 25 16:27:33.019127 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 16:27:33.019219 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 16:27:33.019312 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:27:33.019322 kernel: vgaarb: loaded Jun 25 16:27:33.019330 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 16:27:33.019337 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 16:27:33.019345 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:27:33.019352 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:27:33.019360 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:27:33.019368 kernel: pnp: PnP ACPI init Jun 25 16:27:33.019382 kernel: pnp: PnP ACPI: found 4 devices Jun 25 16:27:33.019398 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:27:33.019407 kernel: NET: Registered PF_INET protocol family Jun 25 16:27:33.019415 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:27:33.019423 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 16:27:33.019431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:27:33.019439 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 16:27:33.019446 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 16:27:33.019454 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 16:27:33.019464 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:27:33.019471 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 16:27:33.019478 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:27:33.019486 kernel: NET: Registered PF_XDP protocol family Jun 25 16:27:33.019608 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:27:33.019733 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:27:33.019824 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:27:33.019906 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 16:27:33.020036 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 16:27:33.020140 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 16:27:33.021121 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:27:33.021145 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 16:27:33.021269 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 50147 usecs Jun 25 16:27:33.021280 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:27:33.021288 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 16:27:33.021297 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Jun 25 16:27:33.021304 kernel: Initialise system trusted keyrings Jun 25 16:27:33.021319 kernel: 
workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 16:27:33.021327 kernel: Key type asymmetric registered Jun 25 16:27:33.021335 kernel: Asymmetric key parser 'x509' registered Jun 25 16:27:33.021342 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:27:33.021350 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:27:33.021358 kernel: io scheduler mq-deadline registered Jun 25 16:27:33.021366 kernel: io scheduler kyber registered Jun 25 16:27:33.021374 kernel: io scheduler bfq registered Jun 25 16:27:33.021381 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:27:33.021391 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 25 16:27:33.021400 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 16:27:33.021407 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 16:27:33.021415 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:27:33.021422 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:27:33.021430 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:27:33.021438 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:27:33.021445 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:27:33.021453 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:27:33.021565 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 25 16:27:33.021649 kernel: rtc_cmos 00:03: registered as rtc0 Jun 25 16:27:33.021730 kernel: rtc_cmos 00:03: setting system clock to 2024-06-25T16:27:32 UTC (1719332852) Jun 25 16:27:33.021823 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jun 25 16:27:33.021833 kernel: intel_pstate: CPU model not supported Jun 25 16:27:33.021841 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:27:33.021848 kernel: Segment Routing with IPv6 Jun 25 16:27:33.021856 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:27:33.021866 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:27:33.021874 kernel: Key type dns_resolver registered Jun 25 16:27:33.021881 kernel: IPI shorthand broadcast: enabled Jun 25 16:27:33.021889 kernel: sched_clock: Marking stable (1212243396, 139462188)->(1403885476, -52179892) Jun 25 16:27:33.021897 kernel: registered taskstats version 1 Jun 25 16:27:33.021904 kernel: Loading compiled-in X.509 certificates Jun 25 16:27:33.021912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:27:33.021919 kernel: Key type .fscrypt registered Jun 25 16:27:33.021929 kernel: Key type fscrypt-provisioning registered Jun 25 16:27:33.022034 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 16:27:33.022046 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:27:33.022056 kernel: ima: No architecture policies found Jun 25 16:27:33.022067 kernel: clk: Disabling unused clocks Jun 25 16:27:33.022098 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:27:33.022114 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:27:33.022126 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:27:33.022138 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:27:33.022152 kernel: Run /init as init process Jun 25 16:27:33.022164 kernel: with arguments: Jun 25 16:27:33.022175 kernel: /init Jun 25 16:27:33.022186 kernel: with environment: Jun 25 16:27:33.022197 kernel: HOME=/ Jun 25 16:27:33.022209 kernel: TERM=linux Jun 25 16:27:33.022223 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:27:33.022241 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:27:33.022258 systemd[1]: Detected virtualization kvm. Jun 25 16:27:33.022275 systemd[1]: Detected architecture x86-64. Jun 25 16:27:33.022289 systemd[1]: Running in initrd. Jun 25 16:27:33.022298 systemd[1]: No hostname configured, using default hostname. Jun 25 16:27:33.022306 systemd[1]: Hostname set to . Jun 25 16:27:33.022315 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:27:33.022323 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:27:33.022332 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:27:33.022343 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:27:33.022352 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:27:33.022360 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:27:33.022368 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:27:33.022376 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:27:33.022385 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:27:33.022393 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:27:33.022402 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:27:33.022414 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:27:33.022425 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:27:33.022434 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:27:33.022442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:27:33.022451 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:27:33.022462 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:27:33.022471 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:27:33.022479 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:27:33.022490 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:27:33.022499 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 16:27:33.022507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:27:33.022515 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:27:33.022524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:27:33.022539 systemd-journald[180]: Journal started Jun 25 16:27:33.022635 systemd-journald[180]: Runtime Journal (/run/log/journal/5f0be1fa6abd458fb7e79b837a134e57) is 4.9M, max 39.3M, 34.4M free. Jun 25 16:27:33.017226 systemd-modules-load[181]: Inserted module 'overlay' Jun 25 16:27:33.055199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:27:33.055232 kernel: Bridge firewalling registered Jun 25 16:27:33.054059 systemd-modules-load[181]: Inserted module 'br_netfilter' Jun 25 16:27:33.060838 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:27:33.060869 kernel: audit: type=1130 audit(1719332853.054:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.067286 kernel: audit: type=1130 audit(1719332853.060:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.061678 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:27:33.078176 kernel: audit: type=1130 audit(1719332853.064:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.078213 kernel: audit: type=1130 audit(1719332853.065:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.065628 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:27:33.074581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 16:27:33.076275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:27:33.081709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:27:33.091414 kernel: SCSI subsystem initialized Jun 25 16:27:33.091508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jun 25 16:27:33.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.104992 kernel: audit: type=1130 audit(1719332853.091:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.113730 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:27:33.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.118442 kernel: audit: type=1130 audit(1719332853.113:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.118506 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:27:33.118526 kernel: audit: type=1334 audit(1719332853.114:8): prog-id=6 op=LOAD Jun 25 16:27:33.114000 audit: BPF prog-id=6 op=LOAD Jun 25 16:27:33.121977 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:27:33.122129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:27:33.135544 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:27:33.133582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:27:33.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.142107 kernel: audit: type=1130 audit(1719332853.135:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.142787 systemd-modules-load[181]: Inserted module 'dm_multipath' Jun 25 16:27:33.146231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:27:33.147113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:27:33.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.162966 kernel: audit: type=1130 audit(1719332853.151:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.158507 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:27:33.168045 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:27:33.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.173329 systemd-resolved[198]: Positive Trust Anchors: Jun 25 16:27:33.173349 systemd-resolved[198]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:27:33.173382 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:27:33.176436 systemd-resolved[198]: Defaulting to hostname 'linux'. Jun 25 16:27:33.177642 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:27:33.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.181547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:27:33.185810 dracut-cmdline[200]: dracut-dracut-053 Jun 25 16:27:33.190486 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:27:33.293973 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:27:33.315004 kernel: iscsi: registered transport (tcp) Jun 25 16:27:33.354331 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:27:33.354424 kernel: QLogic iSCSI HBA Driver Jun 25 16:27:33.414254 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:27:33.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.423304 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:27:33.511022 kernel: raid6: avx2x4 gen() 20787 MB/s Jun 25 16:27:33.529011 kernel: raid6: avx2x2 gen() 20150 MB/s Jun 25 16:27:33.546340 kernel: raid6: avx2x1 gen() 22769 MB/s Jun 25 16:27:33.546431 kernel: raid6: using algorithm avx2x1 gen() 22769 MB/s Jun 25 16:27:33.565363 kernel: raid6: .... xor() 13315 MB/s, rmw enabled Jun 25 16:27:33.565441 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:27:33.570988 kernel: xor: automatically using best checksumming function avx Jun 25 16:27:33.778020 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:27:33.796502 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:27:33.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.797000 audit: BPF prog-id=7 op=LOAD Jun 25 16:27:33.797000 audit: BPF prog-id=8 op=LOAD Jun 25 16:27:33.802302 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:27:33.830866 systemd-udevd[381]: Using default interface naming scheme 'v252'. 
Jun 25 16:27:33.836597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:27:33.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.848213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:27:33.865170 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Jun 25 16:27:33.912349 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:27:33.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:33.917263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:27:33.976292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:27:33.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:34.026973 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jun 25 16:27:34.082380 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jun 25 16:27:34.082596 kernel: scsi host0: Virtio SCSI HBA Jun 25 16:27:34.082732 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:27:34.082744 kernel: GPT:9289727 != 125829119 Jun 25 16:27:34.082759 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:27:34.082773 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:27:34.082800 kernel: GPT:9289727 != 125829119 Jun 25 16:27:34.082815 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:27:34.082832 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:27:34.084985 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jun 25 16:27:34.095437 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB) Jun 25 16:27:34.098822 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:27:34.098909 kernel: AES CTR mode by8 optimization enabled Jun 25 16:27:34.135490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (429) Jun 25 16:27:34.133984 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 16:27:34.139964 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (427) Jun 25 16:27:34.151977 kernel: ACPI: bus type USB registered Jun 25 16:27:34.153989 kernel: usbcore: registered new interface driver usbfs Jun 25 16:27:34.157972 kernel: usbcore: registered new interface driver hub Jun 25 16:27:34.164664 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:27:34.171991 kernel: usbcore: registered new device driver usb Jun 25 16:27:34.176047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:27:34.179923 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 16:27:34.182693 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jun 25 16:27:34.211975 kernel: libata version 3.00 loaded. Jun 25 16:27:34.213974 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jun 25 16:27:34.217909 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jun 25 16:27:34.218082 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jun 25 16:27:34.218180 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jun 25 16:27:34.218308 kernel: hub 1-0:1.0: USB hub found Jun 25 16:27:34.218448 kernel: hub 1-0:1.0: 2 ports detected Jun 25 16:27:34.219968 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 16:27:34.232669 kernel: scsi host1: ata_piix Jun 25 16:27:34.232974 kernel: scsi host2: ata_piix Jun 25 16:27:34.233171 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jun 25 16:27:34.233192 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jun 25 16:27:34.269484 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:27:34.277476 disk-uuid[507]: Primary Header is updated. Jun 25 16:27:34.277476 disk-uuid[507]: Secondary Entries is updated. Jun 25 16:27:34.277476 disk-uuid[507]: Secondary Header is updated. Jun 25 16:27:34.281460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:27:35.296972 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:27:35.297140 disk-uuid[509]: The operation has completed successfully. Jun 25 16:27:35.346698 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:27:35.347806 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:27:35.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.357556 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:27:35.361835 sh[530]: Success Jun 25 16:27:35.380065 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 16:27:35.447888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:27:35.449766 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:27:35.453585 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:27:35.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.480169 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:27:35.480256 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:35.482129 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:27:35.484957 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:27:35.485039 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:27:35.497529 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:27:35.498449 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jun 25 16:27:35.506684 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 16:27:35.508388 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:27:35.526155 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:35.526219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:35.526231 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:27:35.541294 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:27:35.544980 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:35.552756 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:27:35.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.557224 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:27:35.674153 ignition[635]: Ignition 2.15.0 Jun 25 16:27:35.677305 ignition[635]: Stage: fetch-offline Jun 25 16:27:35.677455 ignition[635]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:35.677475 ignition[635]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:27:35.677657 ignition[635]: parsed url from cmdline: "" Jun 25 16:27:35.677664 ignition[635]: no config URL provided Jun 25 16:27:35.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.680211 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:27:35.677675 ignition[635]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:27:35.677688 ignition[635]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:27:35.677697 ignition[635]: failed to fetch config: resource requires networking Jun 25 16:27:35.677902 ignition[635]: Ignition finished successfully Jun 25 16:27:35.716988 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:27:35.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.718000 audit: BPF prog-id=9 op=LOAD Jun 25 16:27:35.725362 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:27:35.760365 systemd-networkd[716]: lo: Link UP Jun 25 16:27:35.760387 systemd-networkd[716]: lo: Gained carrier Jun 25 16:27:35.761927 systemd-networkd[716]: Enumeration completed Jun 25 16:27:35.763073 systemd-networkd[716]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:27:35.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.763086 systemd-networkd[716]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:27:35.763146 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:27:35.764708 systemd[1]: Reached target network.target - Network. 
Jun 25 16:27:35.765076 systemd-networkd[716]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 25 16:27:35.765082 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jun 25 16:27:35.768120 systemd-networkd[716]: eth1: Link UP Jun 25 16:27:35.768127 systemd-networkd[716]: eth1: Gained carrier Jun 25 16:27:35.768142 systemd-networkd[716]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:27:35.773226 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 16:27:35.773982 systemd-networkd[716]: eth0: Link UP Jun 25 16:27:35.773988 systemd-networkd[716]: eth0: Gained carrier Jun 25 16:27:35.774004 systemd-networkd[716]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jun 25 16:27:35.781684 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:27:35.798083 systemd-networkd[716]: eth0: DHCPv4 address 164.92.91.188/20, gateway 164.92.80.1 acquired from 169.254.169.253 Jun 25 16:27:35.800244 ignition[718]: Ignition 2.15.0 Jun 25 16:27:35.800261 ignition[718]: Stage: fetch Jun 25 16:27:35.800412 ignition[718]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:35.800429 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:27:35.800546 ignition[718]: parsed url from cmdline: "" Jun 25 16:27:35.800550 ignition[718]: no config URL provided Jun 25 16:27:35.800556 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:27:35.804459 systemd-networkd[716]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253 Jun 25 16:27:35.800566 ignition[718]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:27:35.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.804573 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:27:35.800593 ignition[718]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jun 25 16:27:35.813054 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:27:35.822751 iscsid[727]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:27:35.822751 iscsid[727]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 16:27:35.822751 iscsid[727]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:27:35.822751 iscsid[727]: If using hardware iscsi like qla4xxx this message can be ignored. 
Jun 25 16:27:35.822751 iscsid[727]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:27:35.822751 iscsid[727]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:27:35.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.822775 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:27:35.832350 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:27:35.841078 ignition[718]: GET result: OK Jun 25 16:27:35.841246 ignition[718]: parsing config with SHA512: 4bbf467790b44a9841715ba8a2322c00cdb11294bc509b86ca1c70ad599ac359793d4f6033b17d655313a8574fd8cf3cea04c4ba90e545becba189ebe87e460a Jun 25 16:27:35.850842 unknown[718]: fetched base config from "system" Jun 25 16:27:35.851517 ignition[718]: fetch: fetch complete Jun 25 16:27:35.850858 unknown[718]: fetched base config from "system" Jun 25 16:27:35.851525 ignition[718]: fetch: fetch passed Jun 25 16:27:35.850868 unknown[718]: fetched user config from "digitalocean" Jun 25 16:27:35.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.851605 ignition[718]: Ignition finished successfully Jun 25 16:27:35.855702 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 16:27:35.862175 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:27:35.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.863881 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:27:35.866081 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:27:35.867499 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:27:35.868168 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:27:35.876600 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:27:35.888521 ignition[736]: Ignition 2.15.0 Jun 25 16:27:35.889350 ignition[736]: Stage: kargs Jun 25 16:27:35.889511 ignition[736]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:35.889528 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:27:35.891406 ignition[736]: kargs: kargs passed Jun 25 16:27:35.891468 ignition[736]: Ignition finished successfully Jun 25 16:27:35.892894 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:27:35.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.904875 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:27:35.906673 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:27:35.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:35.931286 ignition[748]: Ignition 2.15.0 Jun 25 16:27:35.931304 ignition[748]: Stage: disks Jun 25 16:27:35.931472 ignition[748]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:35.931489 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:27:35.933022 ignition[748]: disks: disks passed Jun 25 16:27:35.933963 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:27:35.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.933085 ignition[748]: Ignition finished successfully Jun 25 16:27:35.935139 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:27:35.936343 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:27:35.937655 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:27:35.938831 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:27:35.948757 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:27:35.956723 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:27:35.972782 systemd-fsck[756]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:27:35.976653 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:27:35.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:35.981736 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:27:36.092364 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:27:36.093317 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:27:36.094196 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:27:36.103190 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:27:36.107022 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 16:27:36.110556 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jun 25 16:27:36.120970 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (762) Jun 25 16:27:36.121007 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:36.121037 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:36.121054 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:27:36.131900 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 16:27:36.132773 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:27:36.132834 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:27:36.136079 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:27:36.140293 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:27:36.159437 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 25 16:27:36.241634 initrd-setup-root[792]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:27:36.247035 coreos-metadata[764]: Jun 25 16:27:36.246 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:27:36.254453 initrd-setup-root[799]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:27:36.261495 coreos-metadata[764]: Jun 25 16:27:36.261 INFO Fetch successful Jun 25 16:27:36.269987 initrd-setup-root[806]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:27:36.270381 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jun 25 16:27:36.270524 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jun 25 16:27:36.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:36.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:36.285498 initrd-setup-root[813]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:27:36.304437 coreos-metadata[782]: Jun 25 16:27:36.304 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:27:36.315023 coreos-metadata[782]: Jun 25 16:27:36.314 INFO Fetch successful Jun 25 16:27:36.324471 coreos-metadata[782]: Jun 25 16:27:36.324 INFO wrote hostname ci-3815.2.4-a-1561673ea7 to /sysroot/etc/hostname Jun 25 16:27:36.328243 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:27:36.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:36.419678 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:27:36.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:36.427283 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:27:36.430860 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:27:36.440980 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:36.463821 ignition[880]: INFO : Ignition 2.15.0 Jun 25 16:27:36.465223 ignition[880]: INFO : Stage: mount Jun 25 16:27:36.467214 ignition[880]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:36.468255 ignition[880]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:27:36.471475 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:27:36.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:36.473075 ignition[880]: INFO : mount: mount passed Jun 25 16:27:36.473075 ignition[880]: INFO : Ignition finished successfully Jun 25 16:27:36.474010 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jun 25 16:27:36.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:36.477555 kernel: kauditd_printk_skb: 30 callbacks suppressed Jun 25 16:27:36.477601 kernel: audit: type=1130 audit(1719332856.474:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:36.484181 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:27:36.488328 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:27:36.503801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:27:36.513995 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (890) Jun 25 16:27:36.518811 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:27:36.518961 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:27:36.521434 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:27:36.528513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:27:36.561655 ignition[908]: INFO : Ignition 2.15.0 Jun 25 16:27:36.562736 ignition[908]: INFO : Stage: files Jun 25 16:27:36.563513 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:36.564409 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:27:36.566304 ignition[908]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:27:36.568662 ignition[908]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:27:36.569510 ignition[908]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:27:36.574278 ignition[908]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:27:36.575588 ignition[908]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:27:36.575588 ignition[908]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:27:36.574856 unknown[908]: wrote ssh authorized keys file for user: core Jun 25 16:27:36.579095 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:27:36.579095 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:27:36.607114 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:27:36.660262 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:27:36.660262 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:27:36.662528 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:27:37.027139 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:27:37.367160 systemd-networkd[716]: eth1: Gained IPv6LL Jun 25 16:27:37.446042 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:27:37.446042 ignition[908]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:27:37.448419 ignition[908]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:27:37.448419 ignition[908]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:27:37.448419 ignition[908]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:27:37.448419 ignition[908]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:27:37.448419 ignition[908]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:27:37.457417 kernel: audit: type=1130 audit(1719332857.451:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:37.457513 ignition[908]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:27:37.457513 ignition[908]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:27:37.457513 ignition[908]: INFO : files: files passed Jun 25 16:27:37.457513 ignition[908]: INFO : Ignition finished successfully Jun 25 16:27:37.450570 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:27:37.460486 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:27:37.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.464136 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:27:37.475423 kernel: audit: type=1130 audit(1719332857.465:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.475465 kernel: audit: type=1131 audit(1719332857.465:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.465155 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:27:37.465266 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:27:37.483103 initrd-setup-root-after-ignition[934]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:27:37.483103 initrd-setup-root-after-ignition[934]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:27:37.486998 initrd-setup-root-after-ignition[938]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:27:37.490013 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:27:37.496148 kernel: audit: type=1130 audit(1719332857.489:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.490852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:27:37.495271 systemd-networkd[716]: eth0: Gained IPv6LL Jun 25 16:27:37.502281 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:27:37.536713 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:27:37.537610 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:27:37.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 16:27:37.539152 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:27:37.549150 kernel: audit: type=1130 audit(1719332857.537:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.549199 kernel: audit: type=1131 audit(1719332857.537:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.547360 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:27:37.547984 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:27:37.554054 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:27:37.570785 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:27:37.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.577009 kernel: audit: type=1130 audit(1719332857.570:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.578352 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:27:37.590373 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:27:37.592252 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:27:37.593201 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:27:37.594304 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:27:37.600158 kernel: audit: type=1131 audit(1719332857.594:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.594451 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:27:37.595895 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:27:37.600923 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:27:37.602027 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:27:37.603230 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:27:37.604527 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:27:37.605628 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:27:37.606888 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:27:37.608564 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jun 25 16:27:37.609671 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:27:37.610982 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:27:37.612373 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:27:37.618710 kernel: audit: type=1131 audit(1719332857.613:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.613380 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:27:37.613626 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:27:37.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.614850 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:27:37.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.619616 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:27:37.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.619823 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:27:37.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.620791 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:27:37.620965 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:27:37.621817 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:27:37.621980 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:27:37.623208 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 16:27:37.623343 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 16:27:37.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.630287 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 16:27:37.636714 iscsid[727]: iscsid shutting down. Jun 25 16:27:37.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.631229 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:27:37.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:37.631799 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:27:37.631983 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:27:37.634141 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:27:37.634766 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:27:37.634925 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:27:37.637474 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:27:37.638288 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:27:37.644224 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:27:37.644351 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:27:37.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.659221 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:27:37.661067 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:27:37.661962 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:27:37.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.663622 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:27:37.664365 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:27:37.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.667988 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:27:37.669676 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 16:27:37.670532 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:27:37.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.677275 ignition[952]: INFO : Ignition 2.15.0 Jun 25 16:27:37.677275 ignition[952]: INFO : Stage: umount Jun 25 16:27:37.679207 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:27:37.679207 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jun 25 16:27:37.679207 ignition[952]: INFO : umount: umount passed Jun 25 16:27:37.679207 ignition[952]: INFO : Ignition finished successfully Jun 25 16:27:37.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.680468 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jun 25 16:27:37.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.680639 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:27:37.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.682227 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:27:37.682291 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:27:37.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.683198 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:27:37.683254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:27:37.684358 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 16:27:37.684404 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 16:27:37.685451 systemd[1]: Stopped target network.target - Network. Jun 25 16:27:37.686333 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:27:37.686380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:27:37.687403 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:27:37.688492 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:27:37.693070 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:27:37.694111 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:27:37.695208 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:27:37.696719 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:27:37.696814 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:27:37.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.697833 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:27:37.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.697882 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:27:37.699082 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:27:37.699153 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:27:37.700386 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:27:37.700455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:27:37.702235 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:27:37.703053 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:27:37.706047 systemd-networkd[716]: eth1: DHCPv6 lease lost Jun 25 16:27:37.709283 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:27:37.709475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jun 25 16:27:37.710113 systemd-networkd[716]: eth0: DHCPv6 lease lost Jun 25 16:27:37.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.713642 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:27:37.713795 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:27:37.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.714000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:27:37.715390 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:27:37.715439 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:27:37.722000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:27:37.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.724060 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:27:37.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.724700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:27:37.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.724801 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:27:37.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.725615 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:27:37.725669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:27:37.727155 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:27:37.727231 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:27:37.728348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:27:37.728410 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:27:37.734041 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:27:37.736786 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:27:37.736903 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:27:37.744585 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:27:37.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.744894 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 16:27:37.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.746584 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:27:37.746710 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:27:37.747653 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:27:37.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.747703 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:27:37.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.750077 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:27:37.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.750146 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:27:37.751251 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:27:37.751332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:27:37.752769 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:27:37.752843 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:27:37.754202 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:27:37.754276 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:27:37.767395 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:27:37.771069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:27:37.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.771196 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:27:37.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:37.773112 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:27:37.773236 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:27:37.774017 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:27:37.784236 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 16:27:37.796723 systemd[1]: Switching root. Jun 25 16:27:37.818922 systemd-journald[180]: Journal stopped Jun 25 16:27:39.110256 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). 
Jun 25 16:27:39.110344 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:27:39.110384 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:27:39.110414 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:27:39.110426 kernel: SELinux: policy capability open_perms=1 Jun 25 16:27:39.110442 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:27:39.110454 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:27:39.110470 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:27:39.117073 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:27:39.117104 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:27:39.117124 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:27:39.117193 systemd[1]: Successfully loaded SELinux policy in 65.324ms. Jun 25 16:27:39.117250 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.089ms. Jun 25 16:27:39.117282 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:27:39.117305 systemd[1]: Detected virtualization kvm. Jun 25 16:27:39.117323 systemd[1]: Detected architecture x86-64. Jun 25 16:27:39.117347 systemd[1]: Detected first boot. Jun 25 16:27:39.117367 systemd[1]: Hostname set to <ci-3815.2.4-a-1561673ea7>. Jun 25 16:27:39.117386 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:27:39.117416 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:27:39.117441 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:27:39.117460 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:27:39.117481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:27:39.117509 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:27:39.117530 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:27:39.117549 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:27:39.117580 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:27:39.117601 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:27:39.117622 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:27:39.117644 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:27:39.117664 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:27:39.117684 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:27:39.117706 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:27:39.117724 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:27:39.117747 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 16:27:39.117779 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jun 25 16:27:39.117820 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:27:39.117833 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:27:39.117844 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:27:39.117856 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:27:39.117869 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:27:39.117889 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:27:39.117901 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:27:39.117918 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:27:39.117931 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:27:39.117962 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:27:39.117978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:27:39.117996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:27:39.118015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:27:39.118034 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:27:39.118071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:27:39.118091 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:27:39.118110 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:27:39.118135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:39.118155 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:27:39.118176 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:27:39.118196 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:27:39.140672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:27:39.140764 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:39.140815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:27:39.140837 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:27:39.140858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:39.140877 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:27:39.140898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:27:39.140919 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:27:39.140958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:27:39.140977 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:27:39.141012 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:27:39.141032 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 16:27:39.141050 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:27:39.141069 systemd[1]: Stopped systemd-fsck-usr.service. 
Jun 25 16:27:39.141088 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:27:39.141107 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:27:39.141125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:27:39.141145 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:27:39.141165 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:27:39.141196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:27:39.141215 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:27:39.141235 systemd[1]: Stopped verity-setup.service. Jun 25 16:27:39.141256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:39.141274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:27:39.141293 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:27:39.141311 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:27:39.145307 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:27:39.145355 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:27:39.145416 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:27:39.145442 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:27:39.145462 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:27:39.145480 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:27:39.145504 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:39.145523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:27:39.145541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:27:39.145559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:27:39.145590 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:27:39.145612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:27:39.145631 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:27:39.145649 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:27:39.145667 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:27:39.145688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:27:39.145718 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:27:39.145739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:27:39.145759 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:27:39.145777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:27:39.145796 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jun 25 16:27:39.145815 kernel: loop: module loaded Jun 25 16:27:39.145837 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:27:39.145866 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:27:39.145884 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:27:39.145902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:27:39.149054 systemd-journald[1051]: Journal started Jun 25 16:27:39.149233 systemd-journald[1051]: Runtime Journal (/run/log/journal/5f0be1fa6abd458fb7e79b837a134e57) is 4.9M, max 39.3M, 34.4M free. Jun 25 16:27:39.149310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:27:39.149347 kernel: fuse: init (API version 7.37) Jun 25 16:27:37.959000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:27:38.031000 audit: BPF prog-id=10 op=LOAD Jun 25 16:27:38.032000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:27:38.032000 audit: BPF prog-id=11 op=LOAD Jun 25 16:27:38.032000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:27:38.829000 audit: BPF prog-id=12 op=LOAD Jun 25 16:27:38.829000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:27:38.829000 audit: BPF prog-id=13 op=LOAD Jun 25 16:27:38.829000 audit: BPF prog-id=14 op=LOAD Jun 25 16:27:38.829000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:27:38.829000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:27:38.830000 audit: BPF prog-id=15 op=LOAD Jun 25 16:27:38.830000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:27:38.830000 audit: BPF prog-id=16 op=LOAD Jun 25 16:27:38.830000 audit: BPF prog-id=17 op=LOAD Jun 25 16:27:38.830000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:27:38.830000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:27:38.831000 audit: BPF prog-id=18 op=LOAD Jun 25 16:27:38.831000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:27:38.831000 audit: BPF prog-id=19 op=LOAD Jun 25 16:27:38.831000 audit: BPF prog-id=20 op=LOAD Jun 25 16:27:39.149985 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:27:38.832000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:27:38.832000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:27:38.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:38.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:38.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:38.841000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:27:38.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:39.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.005000 audit: BPF prog-id=21 op=LOAD Jun 25 16:27:39.005000 audit: BPF prog-id=22 op=LOAD Jun 25 16:27:39.005000 audit: BPF prog-id=23 op=LOAD Jun 25 16:27:39.005000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:27:39.005000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:27:39.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:39.096000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:27:39.096000 audit[1051]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff79981540 a2=4000 a3=7fff799815dc items=0 ppid=1 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:39.096000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:27:39.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:38.817574 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:27:39.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.171865 systemd-journald[1051]: Time spent on flushing to /var/log/journal/5f0be1fa6abd458fb7e79b837a134e57 is 25.142ms for 1114 entries. Jun 25 16:27:39.171865 systemd-journald[1051]: System Journal (/var/log/journal/5f0be1fa6abd458fb7e79b837a134e57) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:27:39.231736 systemd-journald[1051]: Received client request to flush runtime journal. Jun 25 16:27:39.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:38.817607 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 16:27:39.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:38.833541 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:27:39.156388 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:27:39.166996 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:27:39.167327 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jun 25 16:27:39.174211 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:27:39.181534 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:27:39.202329 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:27:39.233155 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:27:39.248472 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:27:39.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.254345 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:27:39.297437 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:27:39.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.306009 kernel: ACPI: bus type drm_connector registered Jun 25 16:27:39.306875 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:27:39.307133 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:27:39.308368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:27:39.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.314379 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:27:39.329073 udevadm[1088]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:27:39.989256 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:27:39.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:39.989000 audit: BPF prog-id=24 op=LOAD Jun 25 16:27:39.989000 audit: BPF prog-id=25 op=LOAD Jun 25 16:27:39.989000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:27:39.989000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:27:39.996519 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:27:40.021579 systemd-udevd[1089]: Using default interface naming scheme 'v252'. Jun 25 16:27:40.047308 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 16:27:40.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.048000 audit: BPF prog-id=26 op=LOAD Jun 25 16:27:40.054175 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:27:40.059000 audit: BPF prog-id=27 op=LOAD Jun 25 16:27:40.059000 audit: BPF prog-id=28 op=LOAD Jun 25 16:27:40.059000 audit: BPF prog-id=29 op=LOAD Jun 25 16:27:40.065200 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:27:40.125978 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1095) Jun 25 16:27:40.137564 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:27:40.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.190845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:40.191178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:40.197599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:40.200314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:27:40.209232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:27:40.209953 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:27:40.210153 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:27:40.210401 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:40.211327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:40.211654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:27:40.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.215564 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:27:40.215736 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:27:40.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:40.216491 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:27:40.216912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:27:40.217083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:27:40.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.217763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:27:40.244979 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1091) Jun 25 16:27:40.264584 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:27:40.274272 systemd-networkd[1093]: lo: Link UP Jun 25 16:27:40.274991 systemd-networkd[1093]: lo: Gained carrier Jun 25 16:27:40.276055 systemd-networkd[1093]: Enumeration completed Jun 25 16:27:40.276366 systemd-networkd[1093]: eth1: Configuring with /run/systemd/network/10-22:c9:d5:4b:c9:44.network. Jun 25 16:27:40.276408 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:27:40.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.277923 systemd-networkd[1093]: eth0: Configuring with /run/systemd/network/10-a2:67:5b:f2:0f:0a.network. Jun 25 16:27:40.279195 systemd-networkd[1093]: eth1: Link UP Jun 25 16:27:40.279358 systemd-networkd[1093]: eth1: Gained carrier Jun 25 16:27:40.283319 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:27:40.285470 systemd-networkd[1093]: eth0: Link UP Jun 25 16:27:40.285482 systemd-networkd[1093]: eth0: Gained carrier Jun 25 16:27:40.345975 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 16:27:40.366980 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 25 16:27:40.382981 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 16:27:40.394122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jun 25 16:27:40.409051 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:27:40.423987 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:27:40.582981 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 25 16:27:40.592821 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 25 16:27:40.604625 kernel: Console: switching to colour dummy device 80x25 Jun 25 16:27:40.612065 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 25 16:27:40.612202 kernel: [drm] features: -context_init Jun 25 16:27:40.613250 kernel: [drm] number of scanouts: 1 Jun 25 16:27:40.613320 kernel: [drm] number of cap sets: 0 Jun 25 16:27:40.617988 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jun 25 16:27:40.624675 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 25 16:27:40.624806 kernel: virtio-pci 0000:00:02.0: [drm] drm_plane_enable_fb_damage_clips() not called Jun 25 16:27:40.625137 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 16:27:40.630989 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 25 16:27:40.652984 kernel: EDAC MC: Ver: 3.0.0 Jun 25 16:27:40.687690 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:27:40.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.691396 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:27:40.706196 lvm[1131]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:27:40.732683 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:27:40.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.733266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:27:40.738453 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:27:40.745814 lvm[1132]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:27:40.775902 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:27:40.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.776662 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:27:40.781385 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jun 25 16:27:40.781989 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:27:40.782105 systemd[1]: Reached target machines.target - Containers. Jun 25 16:27:40.785775 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:27:40.801336 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jun 25 16:27:40.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:40.805984 kernel: ISO 9660 Extensions: RRIP_1991A Jun 25 16:27:40.808536 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jun 25 16:27:40.809450 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:27:40.817312 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:27:40.819071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:27:40.819218 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:40.825773 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 16:27:40.836302 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:27:40.840299 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:27:40.841112 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1138 (bootctl) Jun 25 16:27:40.844213 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:27:40.900981 kernel: loop0: detected capacity change from 0 to 209816 Jun 25 16:27:40.996915 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:27:40.997881 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:27:40.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.018157 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:27:41.043048 kernel: loop1: detected capacity change from 0 to 80584 Jun 25 16:27:41.044155 systemd-fsck[1143]: fsck.fat 4.2 (2021-01-31) Jun 25 16:27:41.044155 systemd-fsck[1143]: /dev/vda1: 808 files, 120378/258078 clusters Jun 25 16:27:41.049505 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:27:41.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.056477 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:27:41.081116 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:27:41.111360 kernel: loop2: detected capacity change from 0 to 8 Jun 25 16:27:41.113752 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:27:41.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:41.225439 kernel: loop3: detected capacity change from 0 to 139360 Jun 25 16:27:41.359120 kernel: loop4: detected capacity change from 0 to 209816 Jun 25 16:27:41.373980 kernel: loop5: detected capacity change from 0 to 80584 Jun 25 16:27:41.389968 kernel: loop6: detected capacity change from 0 to 8 Jun 25 16:27:41.393987 kernel: loop7: detected capacity change from 0 to 139360 Jun 25 16:27:41.419889 (sd-sysext)[1150]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jun 25 16:27:41.420863 (sd-sysext)[1150]: Merged extensions into '/usr'. Jun 25 16:27:41.423981 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 16:27:41.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.429262 systemd[1]: Starting ensure-sysext.service... Jun 25 16:27:41.433497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:27:41.477773 systemd[1]: Reloading. Jun 25 16:27:41.526767 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:27:41.535438 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:27:41.541306 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:27:41.548809 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:27:41.630067 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:27:41.719172 systemd-networkd[1093]: eth1: Gained IPv6LL Jun 25 16:27:41.800213 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:27:41.879823 kernel: kauditd_printk_skb: 124 callbacks suppressed Jun 25 16:27:41.880001 kernel: audit: type=1334 audit(1719332861.874:173): prog-id=30 op=LOAD Jun 25 16:27:41.880026 kernel: audit: type=1334 audit(1719332861.874:174): prog-id=31 op=LOAD Jun 25 16:27:41.880043 kernel: audit: type=1334 audit(1719332861.874:175): prog-id=24 op=UNLOAD Jun 25 16:27:41.874000 audit: BPF prog-id=30 op=LOAD Jun 25 16:27:41.874000 audit: BPF prog-id=31 op=LOAD Jun 25 16:27:41.874000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:27:41.874000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:27:41.886735 kernel: audit: type=1334 audit(1719332861.874:176): prog-id=25 op=UNLOAD Jun 25 16:27:41.886885 kernel: audit: type=1334 audit(1719332861.875:177): prog-id=32 op=LOAD Jun 25 16:27:41.875000 audit: BPF prog-id=32 op=LOAD Jun 25 16:27:41.884606 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:27:41.886149 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:27:41.887309 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Jun 25 16:27:41.875000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:27:41.877000 audit: BPF prog-id=33 op=LOAD Jun 25 16:27:41.891119 kernel: audit: type=1334 audit(1719332861.875:178): prog-id=26 op=UNLOAD Jun 25 16:27:41.891264 kernel: audit: type=1334 audit(1719332861.877:179): prog-id=33 op=LOAD Jun 25 16:27:41.877000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:27:41.894981 kernel: audit: type=1334 audit(1719332861.877:180): prog-id=21 op=UNLOAD Jun 25 16:27:41.897677 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:27:41.877000 audit: BPF prog-id=34 op=LOAD Jun 25 16:27:41.877000 audit: BPF prog-id=35 op=LOAD Jun 25 16:27:41.877000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:27:41.877000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:27:41.879000 audit: BPF prog-id=36 op=LOAD Jun 25 16:27:41.879000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:27:41.880000 audit: BPF prog-id=37 op=LOAD Jun 25 16:27:41.880000 audit: BPF prog-id=38 op=LOAD Jun 25 16:27:41.880000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:27:41.880000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:27:41.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.901975 kernel: audit: type=1334 audit(1719332861.877:181): prog-id=34 op=LOAD Jun 25 16:27:41.902032 kernel: audit: type=1334 audit(1719332861.877:182): prog-id=35 op=LOAD Jun 25 16:27:41.916216 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:27:41.919990 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:27:41.922000 audit: BPF prog-id=39 op=LOAD Jun 25 16:27:41.925262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:27:41.932000 audit: BPF prog-id=40 op=LOAD Jun 25 16:27:41.943329 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:27:41.952404 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:27:41.964281 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:41.964512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:41.970543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:41.974321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:27:41.975104 systemd-networkd[1093]: eth0: Gained IPv6LL Jun 25 16:27:41.985000 audit[1227]: SYSTEM_BOOT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:41.987024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:27:41.989479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:27:41.989962 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:41.990079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:41.991378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:41.991635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:27:41.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.995191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:27:41.995411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:27:41.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:41.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.000590 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:27:42.000817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:27:42.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.013290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:42.013718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:42.020688 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:42.025555 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:27:42.040983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:27:42.043826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 16:27:42.044137 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:42.044458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:42.046637 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:27:42.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.051913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:42.052265 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:27:42.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.064170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:42.064598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:27:42.073224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:27:42.082648 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:27:42.084023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:27:42.084335 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:42.085586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:27:42.088415 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:27:42.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.094478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:27:42.094765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:27:42.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:27:42.100152 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:27:42.100388 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:27:42.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.109408 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:27:42.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.113609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:27:42.113780 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:27:42.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.117108 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:27:42.117328 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:27:42.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.121920 systemd[1]: Finished ensure-sysext.service. Jun 25 16:27:42.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:42.126303 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:27:42.126415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:27:42.134387 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:27:42.135922 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jun 25 16:27:42.140000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:27:42.140000 audit[1243]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff0a54f40 a2=420 a3=0 items=0 ppid=1215 pid=1243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:42.140000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:27:42.141597 augenrules[1243]: No rules Jun 25 16:27:42.142528 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:27:42.166516 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:27:42.190204 systemd-resolved[1224]: Positive Trust Anchors: Jun 25 16:27:42.190224 systemd-resolved[1224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:27:42.190254 systemd-resolved[1224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:27:42.195305 systemd-resolved[1224]: Using system hostname 'ci-3815.2.4-a-1561673ea7'. Jun 25 16:27:42.197874 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:27:42.198496 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:27:42.200188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:27:42.200670 systemd[1]: Reached target network.target - Network. Jun 25 16:27:42.201038 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:27:42.201419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:27:42.201750 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:27:42.202226 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:27:42.202674 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:27:42.203254 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:27:42.203787 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:27:42.204194 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:27:42.204556 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:27:42.204598 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:27:42.205041 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:27:42.206602 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:27:42.213202 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:27:42.219747 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jun 25 16:27:42.221649 systemd-timesyncd[1225]: Contacted time server 173.230.154.254:123 (0.flatcar.pool.ntp.org). Jun 25 16:27:42.222165 systemd-timesyncd[1225]: Initial clock synchronization to Tue 2024-06-25 16:27:42.121716 UTC. Jun 25 16:27:42.225808 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:42.226813 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:27:42.227854 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:27:42.228509 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:27:42.228990 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:27:42.229022 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:27:42.241302 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:27:42.248668 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 16:27:42.253926 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:27:42.265309 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:27:42.269886 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:27:42.274197 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:27:42.280918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:27:42.293214 extend-filesystems[1258]: Found loop4 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found loop5 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found loop6 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found loop7 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda1 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda2 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda3 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found usr Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda4 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda6 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda7 Jun 25 16:27:42.303246 extend-filesystems[1258]: Found vda9 Jun 25 16:27:42.294218 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:27:42.332900 jq[1257]: false Jun 25 16:27:42.333235 extend-filesystems[1258]: Checking size of /dev/vda9 Jun 25 16:27:42.309397 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:27:42.323290 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:27:42.340344 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:27:42.344630 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:27:42.355690 extend-filesystems[1258]: Resized partition /dev/vda9 Jun 25 16:27:42.360753 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jun 25 16:27:42.367900 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:27:42.368134 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:27:42.369087 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:27:42.371235 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:27:42.379126 extend-filesystems[1274]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:27:42.422139 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jun 25 16:27:42.380301 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:27:42.382988 dbus-daemon[1255]: [system] SELinux support is enabled Jun 25 16:27:42.389184 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:27:42.406236 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:27:42.406530 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:27:42.415130 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:27:42.415211 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:27:42.422734 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:27:42.422961 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jun 25 16:27:42.423014 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:27:42.445104 jq[1275]: true Jun 25 16:27:42.478482 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:27:42.478755 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:27:42.503190 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jun 25 16:27:42.512974 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1096) Jun 25 16:27:42.528779 jq[1280]: true Jun 25 16:27:42.530514 extend-filesystems[1274]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 16:27:42.530514 extend-filesystems[1274]: old_desc_blocks = 1, new_desc_blocks = 8 Jun 25 16:27:42.530514 extend-filesystems[1274]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jun 25 16:27:42.570712 extend-filesystems[1258]: Resized filesystem in /dev/vda9 Jun 25 16:27:42.570712 extend-filesystems[1258]: Found vdb Jun 25 16:27:42.574089 tar[1278]: linux-amd64/helm Jun 25 16:27:42.532337 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:27:42.532593 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:27:42.607603 systemd[1]: motdgen.service: Deactivated successfully. 
Jun 25 16:27:42.607875 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:27:42.618993 update_engine[1273]: I0625 16:27:42.618851 1273 main.cc:92] Flatcar Update Engine starting Jun 25 16:27:42.631641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:27:42.640429 update_engine[1273]: I0625 16:27:42.640236 1273 update_check_scheduler.cc:74] Next update check in 4m5s Jun 25 16:27:42.644071 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:27:42.651370 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:27:42.685686 coreos-metadata[1253]: Jun 25 16:27:42.684 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:27:42.712052 coreos-metadata[1253]: Jun 25 16:27:42.711 INFO Fetch successful Jun 25 16:27:42.761526 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 16:27:42.762485 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 16:27:42.791575 bash[1312]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:27:42.793067 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:27:42.804374 systemd[1]: Starting sshkeys.service... Jun 25 16:27:42.822916 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 16:27:42.840740 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 16:27:42.903063 systemd-logind[1272]: New seat seat0. Jun 25 16:27:42.933047 systemd-logind[1272]: Watching system buttons on /dev/input/event2 (Power Button) Jun 25 16:27:42.933725 systemd-logind[1272]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:27:42.935045 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 16:27:43.147042 coreos-metadata[1315]: Jun 25 16:27:43.146 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jun 25 16:27:43.170964 coreos-metadata[1315]: Jun 25 16:27:43.169 INFO Fetch successful Jun 25 16:27:43.186137 unknown[1315]: wrote ssh authorized keys file for user: core Jun 25 16:27:43.213587 update-ssh-keys[1325]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:27:43.214438 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 16:27:43.217406 systemd[1]: Finished sshkeys.service. Jun 25 16:27:43.291012 locksmithd[1301]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:27:43.511601 containerd[1279]: time="2024-06-25T16:27:43.511388680Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:27:43.695950 containerd[1279]: time="2024-06-25T16:27:43.695852499Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:27:43.698071 containerd[1279]: time="2024-06-25T16:27:43.698014574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:43.706372 containerd[1279]: time="2024-06-25T16:27:43.706304705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:43.706372 containerd[1279]: time="2024-06-25T16:27:43.706361019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:43.706791 containerd[1279]: time="2024-06-25T16:27:43.706749189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:43.706791 containerd[1279]: time="2024-06-25T16:27:43.706791832Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:27:43.707006 containerd[1279]: time="2024-06-25T16:27:43.706977897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:43.707105 containerd[1279]: time="2024-06-25T16:27:43.707081162Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:43.707151 containerd[1279]: time="2024-06-25T16:27:43.707107443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:43.707248 containerd[1279]: time="2024-06-25T16:27:43.707223481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:43.707562 containerd[1279]: time="2024-06-25T16:27:43.707536001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:43.707614 containerd[1279]: time="2024-06-25T16:27:43.707571970Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:27:43.707614 containerd[1279]: time="2024-06-25T16:27:43.707590632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:27:43.707822 containerd[1279]: time="2024-06-25T16:27:43.707793153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:27:43.707879 containerd[1279]: time="2024-06-25T16:27:43.707824375Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:27:43.707993 containerd[1279]: time="2024-06-25T16:27:43.707969619Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:27:43.708049 containerd[1279]: time="2024-06-25T16:27:43.707997154Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:27:43.726175 containerd[1279]: time="2024-06-25T16:27:43.726054836Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:27:43.726371 containerd[1279]: time="2024-06-25T16:27:43.726195475Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jun 25 16:27:43.726371 containerd[1279]: time="2024-06-25T16:27:43.726223406Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:27:43.726371 containerd[1279]: time="2024-06-25T16:27:43.726300804Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:27:43.726494 containerd[1279]: time="2024-06-25T16:27:43.726445450Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:27:43.726494 containerd[1279]: time="2024-06-25T16:27:43.726485601Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:27:43.726589 containerd[1279]: time="2024-06-25T16:27:43.726508244Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:27:43.726875 containerd[1279]: time="2024-06-25T16:27:43.726843952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:27:43.726962 containerd[1279]: time="2024-06-25T16:27:43.726896261Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:27:43.726962 containerd[1279]: time="2024-06-25T16:27:43.726923488Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:27:43.727045 containerd[1279]: time="2024-06-25T16:27:43.726982181Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 16:27:43.727086 containerd[1279]: time="2024-06-25T16:27:43.727023834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:27:43.727086 containerd[1279]: time="2024-06-25T16:27:43.727072535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:27:43.727166 containerd[1279]: time="2024-06-25T16:27:43.727094882Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:27:43.727166 containerd[1279]: time="2024-06-25T16:27:43.727134693Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:27:43.727166 containerd[1279]: time="2024-06-25T16:27:43.727157877Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:27:43.727268 containerd[1279]: time="2024-06-25T16:27:43.727182560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:27:43.727268 containerd[1279]: time="2024-06-25T16:27:43.727220196Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:27:43.727268 containerd[1279]: time="2024-06-25T16:27:43.727241231Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:27:43.727552 containerd[1279]: time="2024-06-25T16:27:43.727503290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:27:43.728149 containerd[1279]: time="2024-06-25T16:27:43.728089380Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jun 25 16:27:43.728236 containerd[1279]: time="2024-06-25T16:27:43.728208515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728293 containerd[1279]: time="2024-06-25T16:27:43.728236436Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:27:43.728338 containerd[1279]: time="2024-06-25T16:27:43.728293108Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:27:43.728422 containerd[1279]: time="2024-06-25T16:27:43.728400509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728558 containerd[1279]: time="2024-06-25T16:27:43.728535951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728602 containerd[1279]: time="2024-06-25T16:27:43.728569806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728643 containerd[1279]: time="2024-06-25T16:27:43.728609209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728643 containerd[1279]: time="2024-06-25T16:27:43.728631743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728725 containerd[1279]: time="2024-06-25T16:27:43.728655355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728725 containerd[1279]: time="2024-06-25T16:27:43.728694248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728725 containerd[1279]: time="2024-06-25T16:27:43.728713098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.728832 containerd[1279]: time="2024-06-25T16:27:43.728732822Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:27:43.729132 containerd[1279]: time="2024-06-25T16:27:43.729082677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.729192 containerd[1279]: time="2024-06-25T16:27:43.729140903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.729240 containerd[1279]: time="2024-06-25T16:27:43.729186438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.729280 containerd[1279]: time="2024-06-25T16:27:43.729236688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.729280 containerd[1279]: time="2024-06-25T16:27:43.729257537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.729346 containerd[1279]: time="2024-06-25T16:27:43.729304823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.729346 containerd[1279]: time="2024-06-25T16:27:43.729326505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:27:43.729417 containerd[1279]: time="2024-06-25T16:27:43.729344141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:27:43.730000 containerd[1279]: time="2024-06-25T16:27:43.729836257Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:27:43.730415 containerd[1279]: time="2024-06-25T16:27:43.729997752Z" level=info msg="Connect containerd service" Jun 25 16:27:43.730415 containerd[1279]: time="2024-06-25T16:27:43.730075176Z" level=info msg="using legacy CRI server" Jun 25 16:27:43.730415 containerd[1279]: time="2024-06-25T16:27:43.730088346Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:27:43.730415 containerd[1279]: time="2024-06-25T16:27:43.730235360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:27:43.733823 containerd[1279]: time="2024-06-25T16:27:43.733766455Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:27:43.737167 containerd[1279]: time="2024-06-25T16:27:43.737102831Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:27:43.737336 containerd[1279]: time="2024-06-25T16:27:43.737180074Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:27:43.737336 containerd[1279]: time="2024-06-25T16:27:43.737204382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:27:43.737336 containerd[1279]: time="2024-06-25T16:27:43.737225059Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:27:43.737838 containerd[1279]: time="2024-06-25T16:27:43.737803317Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:27:43.737962 containerd[1279]: time="2024-06-25T16:27:43.737885087Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:27:43.751767 containerd[1279]: time="2024-06-25T16:27:43.751668392Z" level=info msg="Start subscribing containerd event" Jun 25 16:27:43.752080 containerd[1279]: time="2024-06-25T16:27:43.752052754Z" level=info msg="Start recovering state" Jun 25 16:27:43.752279 containerd[1279]: time="2024-06-25T16:27:43.752257805Z" level=info msg="Start event monitor" Jun 25 16:27:43.752372 containerd[1279]: time="2024-06-25T16:27:43.752349790Z" level=info msg="Start snapshots syncer" Jun 25 16:27:43.752453 containerd[1279]: time="2024-06-25T16:27:43.752439113Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:27:43.752523 containerd[1279]: time="2024-06-25T16:27:43.752501644Z" level=info msg="Start streaming server" Jun 25 16:27:43.752752 containerd[1279]: time="2024-06-25T16:27:43.752724484Z" level=info msg="containerd successfully booted in 0.244345s" Jun 25 16:27:43.753231 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:27:43.988098 tar[1278]: linux-amd64/LICENSE Jun 25 16:27:43.988098 tar[1278]: linux-amd64/README.md Jun 25 16:27:43.999977 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:27:44.491370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:44.663851 sshd_keygen[1292]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:27:44.706193 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:27:44.712749 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:27:44.726106 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:27:44.726343 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:27:44.735518 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:27:44.757377 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:27:44.764729 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:27:44.777602 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:27:44.783659 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:27:44.785607 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:27:44.798454 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:27:44.814759 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Jun 25 16:27:44.815074 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:27:44.819796 systemd[1]: Startup finished in 1.426s (kernel) + 5.152s (initrd) + 6.924s (userspace) = 13.502s. Jun 25 16:27:45.410302 kubelet[1335]: E0625 16:27:45.410135 1335 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:27:45.413281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:27:45.413453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:27:45.413760 systemd[1]: kubelet.service: Consumed 1.534s CPU time. Jun 25 16:27:50.914147 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:27:50.921876 systemd[1]: Started sshd@0-164.92.91.188:22-139.178.89.65:33178.service - OpenSSH per-connection server daemon (139.178.89.65:33178). Jun 25 16:27:50.997557 sshd[1358]: Accepted publickey for core from 139.178.89.65 port 33178 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:51.000664 sshd[1358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:51.017150 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:27:51.025697 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:27:51.031619 systemd-logind[1272]: New session 1 of user core. Jun 25 16:27:51.046097 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:27:51.054422 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:27:51.060101 (systemd)[1361]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:51.179433 systemd[1361]: Queued start job for default target default.target. Jun 25 16:27:51.189691 systemd[1361]: Reached target paths.target - Paths. Jun 25 16:27:51.189721 systemd[1361]: Reached target sockets.target - Sockets. Jun 25 16:27:51.189736 systemd[1361]: Reached target timers.target - Timers. Jun 25 16:27:51.189749 systemd[1361]: Reached target basic.target - Basic System. Jun 25 16:27:51.189808 systemd[1361]: Reached target default.target - Main User Target. Jun 25 16:27:51.189844 systemd[1361]: Startup finished in 120ms. Jun 25 16:27:51.190002 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 16:27:51.191575 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:27:51.281322 systemd[1]: Started sshd@1-164.92.91.188:22-139.178.89.65:33190.service - OpenSSH per-connection server daemon (139.178.89.65:33190). Jun 25 16:27:51.321682 sshd[1370]: Accepted publickey for core from 139.178.89.65 port 33190 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:51.324621 sshd[1370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:51.332639 systemd-logind[1272]: New session 2 of user core. Jun 25 16:27:51.338365 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:27:51.406536 sshd[1370]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:51.415832 systemd[1]: sshd@1-164.92.91.188:22-139.178.89.65:33190.service: Deactivated successfully. 
Jun 25 16:27:51.417044 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:27:51.417763 systemd-logind[1272]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:27:51.426744 systemd[1]: Started sshd@2-164.92.91.188:22-139.178.89.65:33198.service - OpenSSH per-connection server daemon (139.178.89.65:33198). Jun 25 16:27:51.429495 systemd-logind[1272]: Removed session 2. Jun 25 16:27:51.473903 sshd[1376]: Accepted publickey for core from 139.178.89.65 port 33198 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:51.476859 sshd[1376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:51.483307 systemd-logind[1272]: New session 3 of user core. Jun 25 16:27:51.486220 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:27:51.549091 sshd[1376]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:51.557908 systemd[1]: sshd@2-164.92.91.188:22-139.178.89.65:33198.service: Deactivated successfully. Jun 25 16:27:51.558910 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:27:51.560001 systemd-logind[1272]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:27:51.566719 systemd[1]: Started sshd@3-164.92.91.188:22-139.178.89.65:33206.service - OpenSSH per-connection server daemon (139.178.89.65:33206). Jun 25 16:27:51.568364 systemd-logind[1272]: Removed session 3. Jun 25 16:27:51.609543 sshd[1382]: Accepted publickey for core from 139.178.89.65 port 33206 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:51.612401 sshd[1382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:51.619631 systemd-logind[1272]: New session 4 of user core. Jun 25 16:27:51.622322 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:27:51.690732 sshd[1382]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:51.700975 systemd[1]: sshd@3-164.92.91.188:22-139.178.89.65:33206.service: Deactivated successfully. Jun 25 16:27:51.702281 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:27:51.703250 systemd-logind[1272]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:27:51.709823 systemd[1]: Started sshd@4-164.92.91.188:22-139.178.89.65:33210.service - OpenSSH per-connection server daemon (139.178.89.65:33210). Jun 25 16:27:51.712242 systemd-logind[1272]: Removed session 4. Jun 25 16:27:51.756294 sshd[1388]: Accepted publickey for core from 139.178.89.65 port 33210 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:51.757092 sshd[1388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:51.764591 systemd-logind[1272]: New session 5 of user core. Jun 25 16:27:51.771423 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:27:51.892045 sudo[1391]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:27:51.892462 sudo[1391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:51.917154 sudo[1391]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:51.924463 sshd[1388]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:51.933644 systemd[1]: sshd@4-164.92.91.188:22-139.178.89.65:33210.service: Deactivated successfully. Jun 25 16:27:51.934637 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:27:51.936300 systemd-logind[1272]: Session 5 logged out. Waiting for processes to exit. 
Jun 25 16:27:51.945117 systemd[1]: Started sshd@5-164.92.91.188:22-139.178.89.65:33216.service - OpenSSH per-connection server daemon (139.178.89.65:33216). Jun 25 16:27:51.947756 systemd-logind[1272]: Removed session 5. Jun 25 16:27:51.984033 sshd[1395]: Accepted publickey for core from 139.178.89.65 port 33216 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:51.986065 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:51.992829 systemd-logind[1272]: New session 6 of user core. Jun 25 16:27:52.001478 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:27:52.069591 sudo[1399]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:27:52.070039 sudo[1399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:52.075718 sudo[1399]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:52.104995 sudo[1398]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:27:52.106020 sudo[1398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:52.133951 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:27:52.134000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:27:52.136300 kernel: kauditd_printk_skb: 37 callbacks suppressed Jun 25 16:27:52.136371 kernel: audit: type=1305 audit(1719332872.134:218): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:27:52.136841 auditctl[1402]: No rules Jun 25 16:27:52.140254 kernel: audit: type=1300 audit(1719332872.134:218): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd2233ba80 a2=420 a3=0 items=0 ppid=1 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:52.134000 audit[1402]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd2233ba80 a2=420 a3=0 items=0 ppid=1 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:52.134000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:27:52.137698 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:27:52.137964 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:27:52.141762 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:27:52.142036 kernel: audit: type=1327 audit(1719332872.134:218): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:27:52.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.145955 kernel: audit: type=1131 audit(1719332872.137:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:52.180888 augenrules[1419]: No rules Jun 25 16:27:52.182293 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:27:52.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.185966 kernel: audit: type=1130 audit(1719332872.181:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.185728 sudo[1398]: pam_unix(sudo:session): session closed for user root Jun 25 16:27:52.184000 audit[1398]: USER_END pid=1398 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.185000 audit[1398]: CRED_DISP pid=1398 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.191722 kernel: audit: type=1106 audit(1719332872.184:221): pid=1398 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.191879 kernel: audit: type=1104 audit(1719332872.185:222): pid=1398 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.193290 sshd[1395]: pam_unix(sshd:session): session closed for user core Jun 25 16:27:52.194000 audit[1395]: USER_END pid=1395 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.200024 kernel: audit: type=1106 audit(1719332872.194:223): pid=1395 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.195000 audit[1395]: CRED_DISP pid=1395 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.203163 kernel: audit: type=1104 audit(1719332872.195:224): pid=1395 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.205194 systemd[1]: sshd@5-164.92.91.188:22-139.178.89.65:33216.service: Deactivated successfully. 
Jun 25 16:27:52.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-164.92.91.188:22-139.178.89.65:33216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.206220 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:27:52.208979 kernel: audit: type=1131 audit(1719332872.204:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-164.92.91.188:22-139.178.89.65:33216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.211228 systemd-logind[1272]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:27:52.215884 systemd[1]: Started sshd@6-164.92.91.188:22-139.178.89.65:33228.service - OpenSSH per-connection server daemon (139.178.89.65:33228). Jun 25 16:27:52.218910 systemd-logind[1272]: Removed session 6. Jun 25 16:27:52.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.92.91.188:22-139.178.89.65:33228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.265000 audit[1425]: USER_ACCT pid=1425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.266513 sshd[1425]: Accepted publickey for core from 139.178.89.65 port 33228 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:27:52.267000 audit[1425]: CRED_ACQ pid=1425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.267000 audit[1425]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4aba4470 a2=3 a3=7fabd0fbb480 items=0 ppid=1 pid=1425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:52.267000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:27:52.268819 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:27:52.276641 systemd-logind[1272]: New session 7 of user core. Jun 25 16:27:52.281362 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:27:52.286000 audit[1425]: USER_START pid=1425 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.288000 audit[1428]: CRED_ACQ pid=1428 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:27:52.343000 audit[1429]: USER_ACCT pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:52.344791 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:27:52.344000 audit[1429]: CRED_REFR pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.345714 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:27:52.347000 audit[1429]: USER_START pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:27:52.533916 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:27:53.084447 dockerd[1438]: time="2024-06-25T16:27:53.084364457Z" level=info msg="Starting up" Jun 25 16:27:53.143612 systemd[1]: var-lib-docker-metacopy\x2dcheck2123805293-merged.mount: Deactivated successfully. Jun 25 16:27:53.171562 dockerd[1438]: time="2024-06-25T16:27:53.171423240Z" level=info msg="Loading containers: start." Jun 25 16:27:53.270000 audit[1470]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.270000 audit[1470]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe83fbbde0 a2=0 a3=7f3162480e90 items=0 ppid=1438 pid=1470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.270000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:27:53.274000 audit[1472]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.274000 audit[1472]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc8a432110 a2=0 a3=7f4b7df49e90 items=0 ppid=1438 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:27:53.277000 audit[1474]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.277000 audit[1474]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcbf9062d0 a2=0 a3=7ff3d8895e90 items=0 ppid=1438 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.277000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:27:53.282000 audit[1476]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.282000 audit[1476]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeee0bf720 a2=0 a3=7f1731ecee90 items=0 ppid=1438 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:27:53.286000 audit[1478]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.286000 audit[1478]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff5c7c4f60 a2=0 a3=7f9c728e5e90 items=0 ppid=1438 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.286000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:27:53.290000 audit[1480]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.290000 audit[1480]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd10c22ce0 a2=0 a3=7f8dd1adee90 items=0 ppid=1438 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.290000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:27:53.312000 audit[1482]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.312000 audit[1482]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffffa87e700 a2=0 a3=7f49f1c87e90 items=0 ppid=1438 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:27:53.316000 audit[1484]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.316000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc43b65b70 a2=0 a3=7f36f9569e90 items=0 ppid=1438 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.316000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:27:53.319000 audit[1486]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.319000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd0c881750 a2=0 a3=7fa93f927e90 items=0 ppid=1438 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.319000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:53.333000 audit[1490]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.333000 audit[1490]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd5bc691d0 a2=0 a3=7f0ecadc9e90 items=0 ppid=1438 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.333000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:53.336000 audit[1491]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.336000 audit[1491]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffdddd0720 a2=0 a3=7f4f05d0ee90 items=0 ppid=1438 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.336000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:53.350991 kernel: Initializing XFRM netlink socket Jun 25 16:27:53.411000 audit[1500]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.411000 audit[1500]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe2aee23b0 a2=0 a3=7f2a65ac7e90 items=0 ppid=1438 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.411000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:27:53.429000 audit[1503]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.429000 audit[1503]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc215e0ef0 a2=0 a3=7f2c29a24e90 items=0 ppid=1438 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.429000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:27:53.437000 audit[1507]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.437000 audit[1507]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffedd167650 a2=0 a3=7efe4400ce90 items=0 ppid=1438 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:27:53.441000 audit[1509]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.441000 audit[1509]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd52f37850 a2=0 a3=7fed72ed2e90 items=0 ppid=1438 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.441000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:27:53.446000 audit[1511]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.446000 audit[1511]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc4ce42b30 a2=0 a3=7fe2566a2e90 items=0 ppid=1438 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.446000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:27:53.450000 audit[1513]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.450000 audit[1513]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffc42e7e30 a2=0 a3=7f68c2579e90 items=0 ppid=1438 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.450000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:27:53.455000 audit[1515]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.455000 audit[1515]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffab1ffde0 a2=0 a3=7fd9e8afce90 items=0 ppid=1438 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.455000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:27:53.466000 audit[1518]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.466000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcebf6a9e0 a2=0 a3=7fdb0626de90 items=0 ppid=1438 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:27:53.471000 audit[1520]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.471000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdcc163e20 a2=0 a3=7f276e91ce90 items=0 ppid=1438 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.471000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:27:53.476000 audit[1522]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.476000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffeded18470 a2=0 a3=7fc4e3052e90 items=0 ppid=1438 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.476000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:27:53.480000 audit[1524]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.480000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe5e199ce0 a2=0 a3=7fbb08194e90 items=0 ppid=1438 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.480000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:27:53.482537 systemd-networkd[1093]: docker0: Link UP Jun 25 16:27:53.496000 audit[1528]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.496000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe54251750 a2=0 a3=7f3f8e23ae90 items=0 ppid=1438 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.496000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:53.498000 audit[1529]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:27:53.498000 audit[1529]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdcc267bd0 a2=0 a3=7f30118e8e90 items=0 ppid=1438 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:27:53.498000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:27:53.500610 dockerd[1438]: time="2024-06-25T16:27:53.500534362Z" level=info msg="Loading containers: done." Jun 25 16:27:53.611675 dockerd[1438]: time="2024-06-25T16:27:53.611547441Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:27:53.612692 dockerd[1438]: time="2024-06-25T16:27:53.612614047Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:27:53.613084 dockerd[1438]: time="2024-06-25T16:27:53.613063836Z" level=info msg="Daemon has completed initialization" Jun 25 16:27:53.666842 dockerd[1438]: time="2024-06-25T16:27:53.666763893Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:27:53.672468 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:27:53.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:54.126473 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2408259574-merged.mount: Deactivated successfully. Jun 25 16:27:54.817078 containerd[1279]: time="2024-06-25T16:27:54.817010483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:27:55.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:55.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:27:55.540237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:27:55.540489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:55.540592 systemd[1]: kubelet.service: Consumed 1.534s CPU time. Jun 25 16:27:55.552751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:27:55.597051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336267094.mount: Deactivated successfully. Jun 25 16:27:55.730876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:27:55.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:27:55.845499 kubelet[1589]: E0625 16:27:55.845333 1589 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:27:55.849475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:27:55.849631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:27:55.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:27:57.781717 containerd[1279]: time="2024-06-25T16:27:57.781641495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.782990 containerd[1279]: time="2024-06-25T16:27:57.782889821Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 16:27:57.783658 containerd[1279]: time="2024-06-25T16:27:57.783625630Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.786255 containerd[1279]: time="2024-06-25T16:27:57.786200193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.789245 containerd[1279]: time="2024-06-25T16:27:57.789181750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:27:57.791052 containerd[1279]: time="2024-06-25T16:27:57.790981466Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.973902965s" Jun 25 16:27:57.791052 containerd[1279]: time="2024-06-25T16:27:57.791050189Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:27:57.823773 containerd[1279]: time="2024-06-25T16:27:57.823725232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:28:00.305036 containerd[1279]: time="2024-06-25T16:28:00.304800474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:00.306553 containerd[1279]: time="2024-06-25T16:28:00.306464733Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 16:28:00.310046 containerd[1279]: time="2024-06-25T16:28:00.309966283Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 
16:28:00.321228 containerd[1279]: time="2024-06-25T16:28:00.321162123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:00.323351 containerd[1279]: time="2024-06-25T16:28:00.323298933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:00.327060 containerd[1279]: time="2024-06-25T16:28:00.326803284Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.502708042s" Jun 25 16:28:00.327060 containerd[1279]: time="2024-06-25T16:28:00.327003465Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:28:00.366856 containerd[1279]: time="2024-06-25T16:28:00.366800898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:28:02.055592 containerd[1279]: time="2024-06-25T16:28:02.055498718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:02.058063 containerd[1279]: time="2024-06-25T16:28:02.057967411Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 16:28:02.060395 containerd[1279]: time="2024-06-25T16:28:02.060317932Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:02.065290 containerd[1279]: time="2024-06-25T16:28:02.065223460Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:02.068895 containerd[1279]: time="2024-06-25T16:28:02.068823454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:02.071499 containerd[1279]: time="2024-06-25T16:28:02.071404226Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.704283189s" Jun 25 16:28:02.071499 containerd[1279]: time="2024-06-25T16:28:02.071493538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:28:02.109810 containerd[1279]: time="2024-06-25T16:28:02.109750435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:28:03.750682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298746981.mount: Deactivated 
successfully. Jun 25 16:28:04.401985 containerd[1279]: time="2024-06-25T16:28:04.401897153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.403295 containerd[1279]: time="2024-06-25T16:28:04.403217824Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 16:28:04.403905 containerd[1279]: time="2024-06-25T16:28:04.403867828Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.405677 containerd[1279]: time="2024-06-25T16:28:04.405629288Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.407731 containerd[1279]: time="2024-06-25T16:28:04.407685162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:04.408980 containerd[1279]: time="2024-06-25T16:28:04.408880357Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.298788405s" Jun 25 16:28:04.409189 containerd[1279]: time="2024-06-25T16:28:04.409160125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:28:04.440305 containerd[1279]: time="2024-06-25T16:28:04.440209448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:28:05.016194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057335331.mount: Deactivated successfully. 
Jun 25 16:28:05.024804 containerd[1279]: time="2024-06-25T16:28:05.024711824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:05.026145 containerd[1279]: time="2024-06-25T16:28:05.026069483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:28:05.027043 containerd[1279]: time="2024-06-25T16:28:05.027002134Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:05.029772 containerd[1279]: time="2024-06-25T16:28:05.029717298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:05.032488 containerd[1279]: time="2024-06-25T16:28:05.032427607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:05.034130 containerd[1279]: time="2024-06-25T16:28:05.034067967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 593.78051ms" Jun 25 16:28:05.034354 containerd[1279]: time="2024-06-25T16:28:05.034326115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:28:05.077983 containerd[1279]: time="2024-06-25T16:28:05.077884845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:28:05.729309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303533957.mount: Deactivated successfully. Jun 25 16:28:06.107124 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:28:06.107280 kernel: audit: type=1130 audit(1719332886.099:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:06.107308 kernel: audit: type=1131 audit(1719332886.099:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:06.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:06.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:06.100632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:28:06.100873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:06.108623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:06.372959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
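The kauditd lines above carry two clocks: the journal's wall-clock prefix (Jun 25 16:28:06.107124) and the audit record's own stamp, audit(1719332886.099:264), which is epoch seconds, milliseconds, and a per-boot serial number. A small illustrative Python sketch, not part of the system above, that decodes the audit stamp back to UTC and confirms it matches the journal prefix.

from datetime import datetime, timezone

def audit_stamp_to_utc(stamp):
    """Decode an 'audit(SECONDS.MILLIS:SERIAL)' stamp into (UTC datetime, serial)."""
    body = stamp[len("audit("):-1]              # "1719332886.099:264"
    epoch, serial = body.split(":")
    secs, millis = epoch.split(".")
    when = datetime.fromtimestamp(int(secs), tz=timezone.utc).replace(microsecond=int(millis) * 1000)
    return when, int(serial)

when, serial = audit_stamp_to_utc("audit(1719332886.099:264)")
print(when.isoformat(), serial)   # 2024-06-25T16:28:06.099000+00:00 264

The serial part increments with each audit record within a boot, which is why the adjacent records above show 264 and 265.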
Jun 25 16:28:06.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:06.376048 kernel: audit: type=1130 audit(1719332886.372:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:06.525807 kubelet[1706]: E0625 16:28:06.525740 1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:28:06.529233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:28:06.529442 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:28:06.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:28:06.532988 kernel: audit: type=1131 audit(1719332886.528:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:28:08.535093 containerd[1279]: time="2024-06-25T16:28:08.535014422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:08.536983 containerd[1279]: time="2024-06-25T16:28:08.536876078Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:28:08.537520 containerd[1279]: time="2024-06-25T16:28:08.537489490Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:08.540837 containerd[1279]: time="2024-06-25T16:28:08.540784842Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:08.544145 containerd[1279]: time="2024-06-25T16:28:08.544070159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:08.546137 containerd[1279]: time="2024-06-25T16:28:08.546060154Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.467803984s" Jun 25 16:28:08.546137 containerd[1279]: time="2024-06-25T16:28:08.546140340Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:28:08.579288 containerd[1279]: time="2024-06-25T16:28:08.579223882Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:28:09.272191 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3653363000.mount: Deactivated successfully. Jun 25 16:28:09.948023 containerd[1279]: time="2024-06-25T16:28:09.947907153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:09.951540 containerd[1279]: time="2024-06-25T16:28:09.951450519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 16:28:09.957096 containerd[1279]: time="2024-06-25T16:28:09.957014566Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:09.962735 containerd[1279]: time="2024-06-25T16:28:09.962665551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:09.965108 containerd[1279]: time="2024-06-25T16:28:09.965037521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:09.966568 containerd[1279]: time="2024-06-25T16:28:09.966485677Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.386968031s" Jun 25 16:28:09.966568 containerd[1279]: time="2024-06-25T16:28:09.966562278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:28:13.420087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:13.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:13.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:13.425505 kernel: audit: type=1130 audit(1719332893.420:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:13.425653 kernel: audit: type=1131 audit(1719332893.420:269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:13.432755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:13.461341 systemd[1]: Reloading. Jun 25 16:28:13.787868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
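Each completed pull above also makes a rough throughput figure easy to read off, since containerd logs the image size in bytes alongside the elapsed time. A quick illustrative calculation using the etcd and CoreDNS figures logged above; the sizes are the compressed sizes containerd reports, so the result is only approximate.

# Figures copied from the "Pulled image" messages above: (bytes, seconds).
pulls = {
    "registry.k8s.io/etcd:3.5.10-0": (56_649_232, 3.467803984),
    "registry.k8s.io/coredns/coredns:v1.10.1": (16_190_758, 1.386968031),
}

for image, (size_bytes, seconds) in pulls.items():
    mib = size_bytes / (1024 * 1024)
    print(f"{image}: {mib:.1f} MiB in {seconds:.2f} s = {mib / seconds:.1f} MiB/s")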
Jun 25 16:28:13.873000 audit: BPF prog-id=44 op=LOAD Jun 25 16:28:13.874000 audit: BPF prog-id=45 op=LOAD Jun 25 16:28:13.876069 kernel: audit: type=1334 audit(1719332893.873:270): prog-id=44 op=LOAD Jun 25 16:28:13.876164 kernel: audit: type=1334 audit(1719332893.874:271): prog-id=45 op=LOAD Jun 25 16:28:13.876201 kernel: audit: type=1334 audit(1719332893.874:272): prog-id=30 op=UNLOAD Jun 25 16:28:13.874000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:28:13.874000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:28:13.877465 kernel: audit: type=1334 audit(1719332893.874:273): prog-id=31 op=UNLOAD Jun 25 16:28:13.878000 audit: BPF prog-id=46 op=LOAD Jun 25 16:28:13.878000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:28:13.880818 kernel: audit: type=1334 audit(1719332893.878:274): prog-id=46 op=LOAD Jun 25 16:28:13.881006 kernel: audit: type=1334 audit(1719332893.878:275): prog-id=41 op=UNLOAD Jun 25 16:28:13.881064 kernel: audit: type=1334 audit(1719332893.878:276): prog-id=47 op=LOAD Jun 25 16:28:13.878000 audit: BPF prog-id=47 op=LOAD Jun 25 16:28:13.878000 audit: BPF prog-id=48 op=LOAD Jun 25 16:28:13.882262 kernel: audit: type=1334 audit(1719332893.878:277): prog-id=48 op=LOAD Jun 25 16:28:13.878000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:28:13.878000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:28:13.880000 audit: BPF prog-id=49 op=LOAD Jun 25 16:28:13.880000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:28:13.882000 audit: BPF prog-id=50 op=LOAD Jun 25 16:28:13.883000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:28:13.883000 audit: BPF prog-id=51 op=LOAD Jun 25 16:28:13.883000 audit: BPF prog-id=52 op=LOAD Jun 25 16:28:13.883000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:28:13.883000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:28:13.884000 audit: BPF prog-id=53 op=LOAD Jun 25 16:28:13.884000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:28:13.887000 audit: BPF prog-id=54 op=LOAD Jun 25 16:28:13.887000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:28:13.889000 audit: BPF prog-id=55 op=LOAD Jun 25 16:28:13.889000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:28:13.889000 audit: BPF prog-id=56 op=LOAD Jun 25 16:28:13.889000 audit: BPF prog-id=57 op=LOAD Jun 25 16:28:13.889000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:28:13.889000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:28:13.912252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:13.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:13.920401 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:13.922800 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:28:13.923431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:13.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:13.930433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:14.089708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:14.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:14.188093 kubelet[1892]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:14.188736 kubelet[1892]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:28:14.188844 kubelet[1892]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:14.189187 kubelet[1892]: I0625 16:28:14.189119 1892 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:28:14.795548 kubelet[1892]: I0625 16:28:14.795472 1892 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:28:14.795548 kubelet[1892]: I0625 16:28:14.795524 1892 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:28:14.795880 kubelet[1892]: I0625 16:28:14.795852 1892 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:28:14.820126 kubelet[1892]: I0625 16:28:14.820075 1892 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:28:14.823555 kubelet[1892]: E0625 16:28:14.823506 1892 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.91.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.838424 kubelet[1892]: I0625 16:28:14.838210 1892 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:28:14.838622 kubelet[1892]: I0625 16:28:14.838596 1892 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:28:14.839014 kubelet[1892]: I0625 16:28:14.838931 1892 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:28:14.839014 kubelet[1892]: I0625 16:28:14.839020 1892 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:28:14.839260 kubelet[1892]: I0625 16:28:14.839042 1892 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:28:14.840089 kubelet[1892]: I0625 16:28:14.840041 1892 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:14.842243 kubelet[1892]: I0625 16:28:14.841921 1892 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:28:14.842243 kubelet[1892]: I0625 16:28:14.842224 1892 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:28:14.842243 kubelet[1892]: I0625 16:28:14.842260 1892 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:28:14.842522 kubelet[1892]: I0625 16:28:14.842281 1892 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:28:14.843903 kubelet[1892]: W0625 16:28:14.843428 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://164.92.91.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-1561673ea7&limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.843903 kubelet[1892]: E0625 16:28:14.843486 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.91.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-1561673ea7&limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.851315 kubelet[1892]: I0625 16:28:14.851255 1892 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:28:14.854814 kubelet[1892]: W0625 16:28:14.854761 
1892 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:28:14.858580 kubelet[1892]: I0625 16:28:14.858509 1892 server.go:1232] "Started kubelet" Jun 25 16:28:14.860340 kubelet[1892]: W0625 16:28:14.860182 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://164.92.91.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.860621 kubelet[1892]: E0625 16:28:14.860601 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.91.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.861056 kubelet[1892]: I0625 16:28:14.861022 1892 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:28:14.865178 kubelet[1892]: I0625 16:28:14.865128 1892 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:28:14.865628 kubelet[1892]: I0625 16:28:14.865602 1892 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:28:14.866718 kubelet[1892]: I0625 16:28:14.866693 1892 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:28:14.867086 kubelet[1892]: I0625 16:28:14.867061 1892 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:28:14.869316 kubelet[1892]: E0625 16:28:14.869122 1892 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815.2.4-a-1561673ea7.17dc4c2890aec9a2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815.2.4-a-1561673ea7", UID:"ci-3815.2.4-a-1561673ea7", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-1561673ea7"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 28, 14, 858463650, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 28, 14, 858463650, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-1561673ea7"}': 'Post "https://164.92.91.188:6443/api/v1/namespaces/default/events": dial tcp 164.92.91.188:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:28:14.870821 kubelet[1892]: E0625 16:28:14.870741 1892 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:28:14.870821 kubelet[1892]: E0625 16:28:14.870798 1892 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:28:14.877806 kubelet[1892]: E0625 16:28:14.877767 1892 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3815.2.4-a-1561673ea7\" not found" Jun 25 16:28:14.877806 kubelet[1892]: I0625 16:28:14.877822 1892 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:28:14.878080 kubelet[1892]: I0625 16:28:14.877970 1892 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:28:14.878080 kubelet[1892]: I0625 16:28:14.878073 1892 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:28:14.879069 kubelet[1892]: W0625 16:28:14.878718 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://164.92.91.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.879216 kubelet[1892]: E0625 16:28:14.879103 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.91.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.879334 kubelet[1892]: E0625 16:28:14.879305 1892 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.91.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-1561673ea7?timeout=10s\": dial tcp 164.92.91.188:6443: connect: connection refused" interval="200ms" Jun 25 16:28:14.895000 audit[1905]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.895000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb67c5eb0 a2=0 a3=7fe62e069e90 items=0 ppid=1892 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:28:14.897000 audit[1906]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.897000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6ad77270 a2=0 a3=7f5230190e90 items=0 ppid=1892 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:28:14.902000 audit[1908]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.902000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd951a2090 a2=0 a3=7f9e5bdc9e90 items=0 ppid=1892 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 
25 16:28:14.902000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:14.906000 audit[1910]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.906000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc7e0b6560 a2=0 a3=7f0cf4b0ee90 items=0 ppid=1892 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:14.927149 kubelet[1892]: I0625 16:28:14.927117 1892 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:28:14.927149 kubelet[1892]: I0625 16:28:14.927145 1892 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:28:14.927149 kubelet[1892]: I0625 16:28:14.927164 1892 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:14.929000 audit[1915]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.929000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc81f05840 a2=0 a3=7f6b9c9fee90 items=0 ppid=1892 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:28:14.932071 kubelet[1892]: I0625 16:28:14.932025 1892 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 16:28:14.933434 kubelet[1892]: I0625 16:28:14.933400 1892 policy_none.go:49] "None policy: Start" Jun 25 16:28:14.934691 kubelet[1892]: I0625 16:28:14.934660 1892 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:28:14.934829 kubelet[1892]: I0625 16:28:14.934709 1892 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:28:14.935000 audit[1916]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:14.938000 audit[1917]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.938000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe31ceba80 a2=0 a3=4 items=0 ppid=1892 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:28:14.935000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc90be0af0 a2=0 a3=7fda5d0afe90 items=0 ppid=1892 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:28:14.944171 kubelet[1892]: I0625 16:28:14.942613 1892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:28:14.944171 kubelet[1892]: I0625 16:28:14.942655 1892 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:28:14.950545 kubelet[1892]: I0625 16:28:14.950490 1892 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:28:14.950793 kubelet[1892]: E0625 16:28:14.950612 1892 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:28:14.955667 kubelet[1892]: W0625 16:28:14.955597 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://164.92.91.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.956058 kubelet[1892]: E0625 16:28:14.956037 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.91.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:14.959516 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jun 25 16:28:14.955000 audit[1918]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.955000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe47237bd0 a2=0 a3=7f75e7df8e90 items=0 ppid=1892 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:28:14.964000 audit[1919]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:14.964000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff27321550 a2=0 a3=7efff5431e90 items=0 ppid=1892 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.964000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:28:14.965000 audit[1920]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:14.965000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfe4c7200 a2=0 a3=7f2bd6343e90 items=0 ppid=1892 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:28:14.972049 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
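In the NETFILTER_CFG records above, the command that created each chain is only visible as a hex-encoded PROCTITLE field, which is the process's argv joined by NUL bytes. A short illustrative decoder; the hex value is copied verbatim from the first PROCTITLE record above.

def decode_proctitle(hex_argv):
    """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
))
# -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle

Decoded the same way, the surrounding records show the kubelet creating its KUBE-IPTABLES-HINT, KUBE-FIREWALL and KUBE-KUBELET-CANARY chains in the mangle, filter and nat tables for both IPv4 and IPv6, plus the KUBE-FIREWALL rule that blocks incoming localnet connections to 127.0.0.0/8.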
Jun 25 16:28:14.980000 audit[1921]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:14.980000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc7c7ca6c0 a2=0 a3=7fe1ac412e90 items=0 ppid=1892 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:28:14.980000 audit[1922]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:14.980000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd1b7b3c70 a2=0 a3=7f6eddba4e90 items=0 ppid=1892 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:14.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:28:14.993982 kubelet[1892]: I0625 16:28:14.993901 1892 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:14.994872 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 16:28:14.999489 kubelet[1892]: E0625 16:28:14.998117 1892 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.92.91.188:6443/api/v1/nodes\": dial tcp 164.92.91.188:6443: connect: connection refused" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.005826 kubelet[1892]: I0625 16:28:15.005786 1892 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:28:15.006930 kubelet[1892]: I0625 16:28:15.006898 1892 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:28:15.009405 kubelet[1892]: E0625 16:28:15.009375 1892 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.4-a-1561673ea7\" not found" Jun 25 16:28:15.051549 kubelet[1892]: I0625 16:28:15.051364 1892 topology_manager.go:215] "Topology Admit Handler" podUID="9fd729b90fdd5329c3262eb9d9e611e3" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.054713 kubelet[1892]: I0625 16:28:15.054674 1892 topology_manager.go:215] "Topology Admit Handler" podUID="1624108e96d1288f4f3743cdf9714df2" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.055857 kubelet[1892]: I0625 16:28:15.055814 1892 topology_manager.go:215] "Topology Admit Handler" podUID="231b9f8260fd834d12abd7192e5e5595" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.065860 systemd[1]: Created slice kubepods-burstable-pod9fd729b90fdd5329c3262eb9d9e611e3.slice - libcontainer container kubepods-burstable-pod9fd729b90fdd5329c3262eb9d9e611e3.slice. 
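Every client-go reflector and node-registration attempt above fails with "dial tcp 164.92.91.188:6443: connect: connection refused", i.e. nothing is listening on the API server endpoint yet. A tiny Python sketch of the same TCP check, purely illustrative; the address and port are taken from the kubelet errors above.

import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# At this point in the boot the static kube-apiserver pod has not started,
# so this check would print False.
print(can_connect("164.92.91.188", 6443))

The refusal is expected at this stage: the kubelet has only just admitted the static control-plane pods (the Topology Admit Handler lines above), and the RunPodSandbox messages for them appear further down.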
Jun 25 16:28:15.081132 kubelet[1892]: E0625 16:28:15.081087 1892 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.91.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-1561673ea7?timeout=10s\": dial tcp 164.92.91.188:6443: connect: connection refused" interval="400ms" Jun 25 16:28:15.082884 systemd[1]: Created slice kubepods-burstable-pod1624108e96d1288f4f3743cdf9714df2.slice - libcontainer container kubepods-burstable-pod1624108e96d1288f4f3743cdf9714df2.slice. Jun 25 16:28:15.085847 kubelet[1892]: I0625 16:28:15.085809 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086105 kubelet[1892]: I0625 16:28:15.085872 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086105 kubelet[1892]: I0625 16:28:15.085908 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/231b9f8260fd834d12abd7192e5e5595-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-1561673ea7\" (UID: \"231b9f8260fd834d12abd7192e5e5595\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086192 kubelet[1892]: I0625 16:28:15.085968 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086192 kubelet[1892]: I0625 16:28:15.086182 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086249 kubelet[1892]: I0625 16:28:15.086220 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086290 kubelet[1892]: I0625 16:28:15.086253 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1624108e96d1288f4f3743cdf9714df2-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-1561673ea7\" (UID: \"1624108e96d1288f4f3743cdf9714df2\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-1561673ea7" Jun 
25 16:28:15.086290 kubelet[1892]: I0625 16:28:15.086286 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/231b9f8260fd834d12abd7192e5e5595-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-1561673ea7\" (UID: \"231b9f8260fd834d12abd7192e5e5595\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086365 kubelet[1892]: I0625 16:28:15.086322 1892 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/231b9f8260fd834d12abd7192e5e5595-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-1561673ea7\" (UID: \"231b9f8260fd834d12abd7192e5e5595\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.086975 systemd[1]: Created slice kubepods-burstable-pod231b9f8260fd834d12abd7192e5e5595.slice - libcontainer container kubepods-burstable-pod231b9f8260fd834d12abd7192e5e5595.slice. Jun 25 16:28:15.200314 kubelet[1892]: I0625 16:28:15.200174 1892 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.200851 kubelet[1892]: E0625 16:28:15.200818 1892 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.92.91.188:6443/api/v1/nodes\": dial tcp 164.92.91.188:6443: connect: connection refused" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.377531 kubelet[1892]: E0625 16:28:15.377336 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:15.379681 containerd[1279]: time="2024-06-25T16:28:15.379068937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-1561673ea7,Uid:9fd729b90fdd5329c3262eb9d9e611e3,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:15.389442 kubelet[1892]: E0625 16:28:15.389396 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:15.390554 kubelet[1892]: E0625 16:28:15.390418 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:15.399435 containerd[1279]: time="2024-06-25T16:28:15.399368081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-1561673ea7,Uid:231b9f8260fd834d12abd7192e5e5595,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:15.400408 containerd[1279]: time="2024-06-25T16:28:15.399775155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-1561673ea7,Uid:1624108e96d1288f4f3743cdf9714df2,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:15.482156 kubelet[1892]: E0625 16:28:15.482113 1892 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.91.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-1561673ea7?timeout=10s\": dial tcp 164.92.91.188:6443: connect: connection refused" interval="800ms" Jun 25 16:28:15.602746 kubelet[1892]: I0625 16:28:15.602693 1892 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.603102 kubelet[1892]: E0625 16:28:15.603084 1892 
kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.92.91.188:6443/api/v1/nodes\": dial tcp 164.92.91.188:6443: connect: connection refused" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:15.965215 kubelet[1892]: W0625 16:28:15.965115 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://164.92.91.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:15.965215 kubelet[1892]: E0625 16:28:15.965212 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.91.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.075295 kubelet[1892]: W0625 16:28:16.075155 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://164.92.91.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.075295 kubelet[1892]: E0625 16:28:16.075255 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.91.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.125573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880423423.mount: Deactivated successfully. Jun 25 16:28:16.137031 containerd[1279]: time="2024-06-25T16:28:16.136914697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.138621 containerd[1279]: time="2024-06-25T16:28:16.138517258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:28:16.139918 containerd[1279]: time="2024-06-25T16:28:16.139847993Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.142242 containerd[1279]: time="2024-06-25T16:28:16.142162380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:28:16.142582 containerd[1279]: time="2024-06-25T16:28:16.142548107Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.144533 containerd[1279]: time="2024-06-25T16:28:16.144458681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:28:16.146044 containerd[1279]: time="2024-06-25T16:28:16.145998780Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.149580 containerd[1279]: time="2024-06-25T16:28:16.149492176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 770.214112ms" Jun 25 16:28:16.165268 containerd[1279]: time="2024-06-25T16:28:16.165160973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.170185 containerd[1279]: time="2024-06-25T16:28:16.166857657Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.170185 containerd[1279]: time="2024-06-25T16:28:16.167985399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.170185 containerd[1279]: time="2024-06-25T16:28:16.169110197Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.170185 containerd[1279]: time="2024-06-25T16:28:16.170166307Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.171360 containerd[1279]: time="2024-06-25T16:28:16.171299894Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.173905 containerd[1279]: time="2024-06-25T16:28:16.173801718Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 773.2014ms" Jun 25 16:28:16.174276 containerd[1279]: time="2024-06-25T16:28:16.174227241Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.194265 containerd[1279]: time="2024-06-25T16:28:16.194041198Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:28:16.196981 containerd[1279]: time="2024-06-25T16:28:16.196892033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 797.013303ms" Jun 25 16:28:16.220596 kubelet[1892]: W0625 16:28:16.219587 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://164.92.91.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-1561673ea7&limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.220596 kubelet[1892]: E0625 16:28:16.219680 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.91.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-1561673ea7&limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.282991 kubelet[1892]: E0625 16:28:16.282866 1892 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.91.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-1561673ea7?timeout=10s\": dial tcp 164.92.91.188:6443: connect: connection refused" interval="1.6s" Jun 25 16:28:16.349359 kubelet[1892]: W0625 16:28:16.348719 1892 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://164.92.91.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.349359 kubelet[1892]: E0625 16:28:16.348803 1892 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.91.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.367580 containerd[1279]: time="2024-06-25T16:28:16.365526812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:16.367580 containerd[1279]: time="2024-06-25T16:28:16.365601023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:16.367580 containerd[1279]: time="2024-06-25T16:28:16.365623484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:16.367580 containerd[1279]: time="2024-06-25T16:28:16.365645745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:16.374924 containerd[1279]: time="2024-06-25T16:28:16.374524586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:16.374924 containerd[1279]: time="2024-06-25T16:28:16.374621217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:16.374924 containerd[1279]: time="2024-06-25T16:28:16.374662545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:16.374924 containerd[1279]: time="2024-06-25T16:28:16.374681168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:16.380493 containerd[1279]: time="2024-06-25T16:28:16.380272386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:16.381302 containerd[1279]: time="2024-06-25T16:28:16.381218962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:16.381531 containerd[1279]: time="2024-06-25T16:28:16.381451208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:16.381736 containerd[1279]: time="2024-06-25T16:28:16.381686676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:16.395461 systemd[1]: Started cri-containerd-ef899241d164c2c6070b2c209bfa2cc0287831db1135ec427328ab2da65d99ce.scope - libcontainer container ef899241d164c2c6070b2c209bfa2cc0287831db1135ec427328ab2da65d99ce. Jun 25 16:28:16.415478 kubelet[1892]: I0625 16:28:16.415410 1892 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:16.416130 kubelet[1892]: E0625 16:28:16.416041 1892 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://164.92.91.188:6443/api/v1/nodes\": dial tcp 164.92.91.188:6443: connect: connection refused" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:16.421000 audit: BPF prog-id=58 op=LOAD Jun 25 16:28:16.421000 audit: BPF prog-id=59 op=LOAD Jun 25 16:28:16.421000 audit[1970]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=1948 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383939323431643136346332633630373062326332303962666132 Jun 25 16:28:16.422000 audit: BPF prog-id=60 op=LOAD Jun 25 16:28:16.422000 audit[1970]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=1948 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383939323431643136346332633630373062326332303962666132 Jun 25 16:28:16.423000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:28:16.423000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:28:16.423000 audit: BPF prog-id=61 op=LOAD Jun 25 16:28:16.423000 audit[1970]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=1948 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383939323431643136346332633630373062326332303962666132 Jun 25 
16:28:16.435277 systemd[1]: Started cri-containerd-bcff9a0d732cf4ee484154d02b4dc14d22ea86322d2fb922ebee0e616c8fdd78.scope - libcontainer container bcff9a0d732cf4ee484154d02b4dc14d22ea86322d2fb922ebee0e616c8fdd78. Jun 25 16:28:16.451275 systemd[1]: Started cri-containerd-6531e7e7df7803a4fec658c3f2aad4fbc338ea652c5e8e339217b15cd84127ec.scope - libcontainer container 6531e7e7df7803a4fec658c3f2aad4fbc338ea652c5e8e339217b15cd84127ec. Jun 25 16:28:16.458000 audit: BPF prog-id=62 op=LOAD Jun 25 16:28:16.459000 audit: BPF prog-id=63 op=LOAD Jun 25 16:28:16.459000 audit[1988]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=1957 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263666639613064373332636634656534383431353464303262346463 Jun 25 16:28:16.459000 audit: BPF prog-id=64 op=LOAD Jun 25 16:28:16.459000 audit[1988]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=1957 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263666639613064373332636634656534383431353464303262346463 Jun 25 16:28:16.459000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:28:16.459000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:28:16.459000 audit: BPF prog-id=65 op=LOAD Jun 25 16:28:16.459000 audit[1988]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=1957 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263666639613064373332636634656534383431353464303262346463 Jun 25 16:28:16.481000 audit: BPF prog-id=66 op=LOAD Jun 25 16:28:16.482000 audit: BPF prog-id=67 op=LOAD Jun 25 16:28:16.482000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=1944 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333165376537646637383033613466656336353863336632616164 Jun 25 16:28:16.482000 audit: BPF prog-id=68 op=LOAD Jun 25 16:28:16.482000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 
items=0 ppid=1944 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.482000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333165376537646637383033613466656336353863336632616164 Jun 25 16:28:16.482000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:28:16.482000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:28:16.483000 audit: BPF prog-id=69 op=LOAD Jun 25 16:28:16.483000 audit[1990]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=1944 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635333165376537646637383033613466656336353863336632616164 Jun 25 16:28:16.506261 containerd[1279]: time="2024-06-25T16:28:16.506198983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-1561673ea7,Uid:9fd729b90fdd5329c3262eb9d9e611e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef899241d164c2c6070b2c209bfa2cc0287831db1135ec427328ab2da65d99ce\"" Jun 25 16:28:16.514239 kubelet[1892]: E0625 16:28:16.514192 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:16.519683 containerd[1279]: time="2024-06-25T16:28:16.519599583Z" level=info msg="CreateContainer within sandbox \"ef899241d164c2c6070b2c209bfa2cc0287831db1135ec427328ab2da65d99ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:28:16.551748 containerd[1279]: time="2024-06-25T16:28:16.551682215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-1561673ea7,Uid:1624108e96d1288f4f3743cdf9714df2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcff9a0d732cf4ee484154d02b4dc14d22ea86322d2fb922ebee0e616c8fdd78\"" Jun 25 16:28:16.553115 kubelet[1892]: E0625 16:28:16.553078 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:16.556125 containerd[1279]: time="2024-06-25T16:28:16.556045760Z" level=info msg="CreateContainer within sandbox \"bcff9a0d732cf4ee484154d02b4dc14d22ea86322d2fb922ebee0e616c8fdd78\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:28:16.560901 containerd[1279]: time="2024-06-25T16:28:16.560815203Z" level=info msg="CreateContainer within sandbox \"ef899241d164c2c6070b2c209bfa2cc0287831db1135ec427328ab2da65d99ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b8974b6a088d339026d94ac1c38ba1a1d6c6ab3a369123260cddd3504360ab4\"" Jun 25 16:28:16.562117 containerd[1279]: time="2024-06-25T16:28:16.562036873Z" level=info msg="StartContainer for \"1b8974b6a088d339026d94ac1c38ba1a1d6c6ab3a369123260cddd3504360ab4\"" Jun 
25 16:28:16.564299 containerd[1279]: time="2024-06-25T16:28:16.564234828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-1561673ea7,Uid:231b9f8260fd834d12abd7192e5e5595,Namespace:kube-system,Attempt:0,} returns sandbox id \"6531e7e7df7803a4fec658c3f2aad4fbc338ea652c5e8e339217b15cd84127ec\"" Jun 25 16:28:16.565598 kubelet[1892]: E0625 16:28:16.565542 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:16.570113 containerd[1279]: time="2024-06-25T16:28:16.570063232Z" level=info msg="CreateContainer within sandbox \"6531e7e7df7803a4fec658c3f2aad4fbc338ea652c5e8e339217b15cd84127ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:28:16.591099 containerd[1279]: time="2024-06-25T16:28:16.591026190Z" level=info msg="CreateContainer within sandbox \"bcff9a0d732cf4ee484154d02b4dc14d22ea86322d2fb922ebee0e616c8fdd78\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4b7bc5c37d7fbe24ab60c444c93aed5b46ff49086a058a46c5c462ccbc98b3a1\"" Jun 25 16:28:16.593224 containerd[1279]: time="2024-06-25T16:28:16.593171139Z" level=info msg="StartContainer for \"4b7bc5c37d7fbe24ab60c444c93aed5b46ff49086a058a46c5c462ccbc98b3a1\"" Jun 25 16:28:16.610744 containerd[1279]: time="2024-06-25T16:28:16.608916534Z" level=info msg="CreateContainer within sandbox \"6531e7e7df7803a4fec658c3f2aad4fbc338ea652c5e8e339217b15cd84127ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c55948b4bf330ba9ddb5f995cd669254d92bf25037cf78a822b97094907f6b8d\"" Jun 25 16:28:16.611654 containerd[1279]: time="2024-06-25T16:28:16.611603164Z" level=info msg="StartContainer for \"c55948b4bf330ba9ddb5f995cd669254d92bf25037cf78a822b97094907f6b8d\"" Jun 25 16:28:16.614277 systemd[1]: Started cri-containerd-1b8974b6a088d339026d94ac1c38ba1a1d6c6ab3a369123260cddd3504360ab4.scope - libcontainer container 1b8974b6a088d339026d94ac1c38ba1a1d6c6ab3a369123260cddd3504360ab4. 
Jun 25 16:28:16.635000 audit: BPF prog-id=70 op=LOAD Jun 25 16:28:16.636000 audit: BPF prog-id=71 op=LOAD Jun 25 16:28:16.636000 audit[2063]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=1948 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162383937346236613038386433333930323664393461633163333862 Jun 25 16:28:16.636000 audit: BPF prog-id=72 op=LOAD Jun 25 16:28:16.636000 audit[2063]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=1948 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162383937346236613038386433333930323664393461633163333862 Jun 25 16:28:16.636000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:28:16.636000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:28:16.636000 audit: BPF prog-id=73 op=LOAD Jun 25 16:28:16.636000 audit[2063]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=1948 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162383937346236613038386433333930323664393461633163333862 Jun 25 16:28:16.657315 systemd[1]: Started cri-containerd-4b7bc5c37d7fbe24ab60c444c93aed5b46ff49086a058a46c5c462ccbc98b3a1.scope - libcontainer container 4b7bc5c37d7fbe24ab60c444c93aed5b46ff49086a058a46c5c462ccbc98b3a1. Jun 25 16:28:16.705298 systemd[1]: Started cri-containerd-c55948b4bf330ba9ddb5f995cd669254d92bf25037cf78a822b97094907f6b8d.scope - libcontainer container c55948b4bf330ba9ddb5f995cd669254d92bf25037cf78a822b97094907f6b8d. 
Jun 25 16:28:16.722000 audit: BPF prog-id=74 op=LOAD Jun 25 16:28:16.723000 audit: BPF prog-id=75 op=LOAD Jun 25 16:28:16.723000 audit[2087]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=1957 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462376263356333376437666265323461623630633434346339336165 Jun 25 16:28:16.724000 audit: BPF prog-id=76 op=LOAD Jun 25 16:28:16.724000 audit[2087]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=1957 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462376263356333376437666265323461623630633434346339336165 Jun 25 16:28:16.725000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:28:16.725000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:28:16.725000 audit: BPF prog-id=77 op=LOAD Jun 25 16:28:16.725000 audit[2087]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=1957 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462376263356333376437666265323461623630633434346339336165 Jun 25 16:28:16.735322 containerd[1279]: time="2024-06-25T16:28:16.735167600Z" level=info msg="StartContainer for \"1b8974b6a088d339026d94ac1c38ba1a1d6c6ab3a369123260cddd3504360ab4\" returns successfully" Jun 25 16:28:16.749000 audit: BPF prog-id=78 op=LOAD Jun 25 16:28:16.750000 audit: BPF prog-id=79 op=LOAD Jun 25 16:28:16.750000 audit[2105]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=1944 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353934386234626633333062613964646235663939356364363639 Jun 25 16:28:16.750000 audit: BPF prog-id=80 op=LOAD Jun 25 16:28:16.750000 audit[2105]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=1944 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.750000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353934386234626633333062613964646235663939356364363639 Jun 25 16:28:16.751000 audit: BPF prog-id=80 op=UNLOAD Jun 25 16:28:16.751000 audit: BPF prog-id=79 op=UNLOAD Jun 25 16:28:16.751000 audit: BPF prog-id=81 op=LOAD Jun 25 16:28:16.751000 audit[2105]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=1944 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:16.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353934386234626633333062613964646235663939356364363639 Jun 25 16:28:16.828395 containerd[1279]: time="2024-06-25T16:28:16.828320642Z" level=info msg="StartContainer for \"c55948b4bf330ba9ddb5f995cd669254d92bf25037cf78a822b97094907f6b8d\" returns successfully" Jun 25 16:28:16.834669 containerd[1279]: time="2024-06-25T16:28:16.834615187Z" level=info msg="StartContainer for \"4b7bc5c37d7fbe24ab60c444c93aed5b46ff49086a058a46c5c462ccbc98b3a1\" returns successfully" Jun 25 16:28:16.967973 kubelet[1892]: E0625 16:28:16.967912 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:16.972629 kubelet[1892]: E0625 16:28:16.972592 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:16.976011 kubelet[1892]: E0625 16:28:16.975979 1892 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.91.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.91.188:6443: connect: connection refused Jun 25 16:28:16.976600 kubelet[1892]: E0625 16:28:16.976570 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:17.504000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:17.504000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0000e4030 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:17.504000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:17.505000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:17.505000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c00021c1c0 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:17.505000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:17.977254 kubelet[1892]: E0625 16:28:17.977021 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:17.977254 kubelet[1892]: E0625 16:28:17.977073 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:18.018402 kubelet[1892]: I0625 16:28:18.017711 1892 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:19.530000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.532483 kernel: kauditd_printk_skb: 137 callbacks suppressed Jun 25 16:28:19.532605 kernel: audit: type=1400 audit(1719332899.530:351): avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.530000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c009bbbad0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.537649 kernel: audit: type=1300 audit(1719332899.530:351): arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c009bbbad0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.530000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.531000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.543974 kernel: audit: type=1327 audit(1719332899.530:351): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.544158 kernel: audit: type=1400 audit(1719332899.531:352): avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.531000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c006dae7e0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.548133 kernel: audit: type=1300 audit(1719332899.531:352): arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c006dae7e0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.548323 kernel: audit: type=1327 audit(1719332899.531:352): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.531000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.533000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=524877 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.533000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0068a1d10 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.563862 kernel: audit: type=1400 audit(1719332899.533:353): avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=524877 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 
16:28:19.564083 kernel: audit: type=1300 audit(1719332899.533:353): arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c0068a1d10 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.570141 kernel: audit: type=1327 audit(1719332899.533:353): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.533000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.540000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=524883 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.576984 kernel: audit: type=1400 audit(1719332899.540:354): avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=524883 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.540000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c009bbbd10 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.540000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.560000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:19.560000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4e a1=c005dc7980 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.560000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.560000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 
16:28:19.560000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4e a1=c0061fc3f0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:28:19.560000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:28:19.749291 kubelet[1892]: E0625 16:28:19.742565 1892 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815.2.4-a-1561673ea7\" not found" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:19.799602 kubelet[1892]: I0625 16:28:19.799432 1892 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:19.847185 kubelet[1892]: I0625 16:28:19.847000 1892 apiserver.go:52] "Watching apiserver" Jun 25 16:28:19.878449 kubelet[1892]: I0625 16:28:19.878411 1892 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:28:21.830063 kubelet[1892]: W0625 16:28:21.830009 1892 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:28:21.830930 kubelet[1892]: E0625 16:28:21.830903 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:21.984448 kubelet[1892]: E0625 16:28:21.984377 1892 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:23.329476 systemd[1]: Reloading. Jun 25 16:28:23.592885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 16:28:23.710000 audit: BPF prog-id=82 op=LOAD Jun 25 16:28:23.710000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:28:23.712000 audit: BPF prog-id=83 op=LOAD Jun 25 16:28:23.712000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:28:23.714000 audit: BPF prog-id=84 op=LOAD Jun 25 16:28:23.714000 audit: BPF prog-id=85 op=LOAD Jun 25 16:28:23.714000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:28:23.714000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:28:23.716000 audit: BPF prog-id=86 op=LOAD Jun 25 16:28:23.716000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:28:23.717000 audit: BPF prog-id=87 op=LOAD Jun 25 16:28:23.717000 audit: BPF prog-id=88 op=LOAD Jun 25 16:28:23.717000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:28:23.717000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:28:23.717000 audit: BPF prog-id=89 op=LOAD Jun 25 16:28:23.717000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:28:23.720000 audit: BPF prog-id=90 op=LOAD Jun 25 16:28:23.720000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:28:23.720000 audit: BPF prog-id=91 op=LOAD Jun 25 16:28:23.720000 audit: BPF prog-id=92 op=LOAD Jun 25 16:28:23.720000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:28:23.720000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:28:23.721000 audit: BPF prog-id=93 op=LOAD Jun 25 16:28:23.721000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:28:23.722000 audit: BPF prog-id=94 op=LOAD Jun 25 16:28:23.722000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:28:23.722000 audit: BPF prog-id=95 op=LOAD Jun 25 16:28:23.723000 audit: BPF prog-id=78 op=UNLOAD Jun 25 16:28:23.725000 audit: BPF prog-id=96 op=LOAD Jun 25 16:28:23.725000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:28:23.726000 audit: BPF prog-id=97 op=LOAD Jun 25 16:28:23.726000 audit: BPF prog-id=54 op=UNLOAD Jun 25 16:28:23.728000 audit: BPF prog-id=98 op=LOAD Jun 25 16:28:23.728000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:28:23.728000 audit: BPF prog-id=99 op=LOAD Jun 25 16:28:23.728000 audit: BPF prog-id=100 op=LOAD Jun 25 16:28:23.728000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:28:23.728000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:28:23.730000 audit: BPF prog-id=101 op=LOAD Jun 25 16:28:23.730000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:28:23.748599 kubelet[1892]: I0625 16:28:23.748552 1892 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:28:23.749205 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:23.774609 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:28:23.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:23.774924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:23.775059 systemd[1]: kubelet.service: Consumed 1.390s CPU time. Jun 25 16:28:23.780517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:28:23.942662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:28:23.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:24.082243 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:24.082243 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:28:24.082243 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:28:24.082883 kubelet[2237]: I0625 16:28:24.082316 2237 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:28:24.091867 kubelet[2237]: I0625 16:28:24.091703 2237 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:28:24.091867 kubelet[2237]: I0625 16:28:24.091791 2237 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:28:24.094354 kubelet[2237]: I0625 16:28:24.094303 2237 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:28:24.098611 kubelet[2237]: I0625 16:28:24.097086 2237 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:28:24.098868 kubelet[2237]: I0625 16:28:24.098804 2237 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:28:24.121867 kubelet[2237]: I0625 16:28:24.121827 2237 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 16:28:24.122418 kubelet[2237]: I0625 16:28:24.122395 2237 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:28:24.122807 kubelet[2237]: I0625 16:28:24.122780 2237 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:28:24.123051 kubelet[2237]: I0625 16:28:24.123033 2237 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:28:24.123156 kubelet[2237]: I0625 16:28:24.123143 2237 
container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:28:24.123292 kubelet[2237]: I0625 16:28:24.123277 2237 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:24.123543 kubelet[2237]: I0625 16:28:24.123523 2237 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:28:24.123659 kubelet[2237]: I0625 16:28:24.123645 2237 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:28:24.123820 kubelet[2237]: I0625 16:28:24.123801 2237 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:28:24.123929 kubelet[2237]: I0625 16:28:24.123916 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:28:24.134494 kubelet[2237]: I0625 16:28:24.134452 2237 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:28:24.136201 kubelet[2237]: I0625 16:28:24.135554 2237 server.go:1232] "Started kubelet" Jun 25 16:28:24.155998 kubelet[2237]: I0625 16:28:24.150885 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:28:24.158639 kubelet[2237]: E0625 16:28:24.158402 2237 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:28:24.158639 kubelet[2237]: E0625 16:28:24.158477 2237 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:28:24.173256 kubelet[2237]: I0625 16:28:24.173202 2237 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:28:24.177053 kubelet[2237]: I0625 16:28:24.177009 2237 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:28:24.177508 kubelet[2237]: I0625 16:28:24.177483 2237 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:28:24.177677 kubelet[2237]: I0625 16:28:24.177658 2237 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:28:24.183577 kubelet[2237]: I0625 16:28:24.183535 2237 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:28:24.186138 kubelet[2237]: I0625 16:28:24.186100 2237 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:28:24.186309 kubelet[2237]: I0625 16:28:24.186289 2237 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:28:24.192386 kubelet[2237]: I0625 16:28:24.192340 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:28:24.200994 kubelet[2237]: I0625 16:28:24.199254 2237 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 16:28:24.201354 kubelet[2237]: I0625 16:28:24.201279 2237 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:28:24.201527 kubelet[2237]: I0625 16:28:24.201511 2237 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:28:24.201728 kubelet[2237]: E0625 16:28:24.201714 2237 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:28:24.280298 kubelet[2237]: I0625 16:28:24.280268 2237 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.301105 kubelet[2237]: I0625 16:28:24.300478 2237 kubelet_node_status.go:108] "Node was previously registered" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.304377 kubelet[2237]: I0625 16:28:24.302704 2237 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.304377 kubelet[2237]: E0625 16:28:24.302865 2237 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:28:24.328828 kubelet[2237]: I0625 16:28:24.328789 2237 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:28:24.329118 kubelet[2237]: I0625 16:28:24.329100 2237 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:28:24.329491 kubelet[2237]: I0625 16:28:24.329447 2237 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:28:24.329856 kubelet[2237]: I0625 16:28:24.329844 2237 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:28:24.329986 kubelet[2237]: I0625 16:28:24.329974 2237 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:28:24.330056 kubelet[2237]: I0625 16:28:24.330049 2237 policy_none.go:49] "None policy: Start" Jun 25 16:28:24.331900 kubelet[2237]: I0625 16:28:24.331859 2237 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:28:24.332272 kubelet[2237]: I0625 16:28:24.332235 2237 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:28:24.332710 kubelet[2237]: I0625 16:28:24.332693 2237 state_mem.go:75] "Updated machine memory state" Jun 25 16:28:24.343456 kubelet[2237]: I0625 16:28:24.343400 2237 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:28:24.350390 kubelet[2237]: I0625 16:28:24.350319 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:28:24.504122 kubelet[2237]: I0625 16:28:24.503512 2237 topology_manager.go:215] "Topology Admit Handler" podUID="231b9f8260fd834d12abd7192e5e5595" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.504122 kubelet[2237]: I0625 16:28:24.503650 2237 topology_manager.go:215] "Topology Admit Handler" podUID="9fd729b90fdd5329c3262eb9d9e611e3" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.504122 kubelet[2237]: I0625 16:28:24.503684 2237 topology_manager.go:215] "Topology Admit Handler" podUID="1624108e96d1288f4f3743cdf9714df2" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.512184 kubelet[2237]: W0625 16:28:24.512140 2237 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:28:24.514807 kubelet[2237]: W0625 16:28:24.514755 2237 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:28:24.525045 kubelet[2237]: W0625 16:28:24.524998 2237 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 16:28:24.525457 kubelet[2237]: E0625 16:28:24.525413 2237 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" already exists" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.580643 kubelet[2237]: I0625 16:28:24.580589 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.581012 kubelet[2237]: I0625 16:28:24.580989 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1624108e96d1288f4f3743cdf9714df2-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-1561673ea7\" (UID: \"1624108e96d1288f4f3743cdf9714df2\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.581188 kubelet[2237]: I0625 16:28:24.581177 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/231b9f8260fd834d12abd7192e5e5595-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-1561673ea7\" (UID: \"231b9f8260fd834d12abd7192e5e5595\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.581338 kubelet[2237]: I0625 16:28:24.581325 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/231b9f8260fd834d12abd7192e5e5595-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-1561673ea7\" (UID: \"231b9f8260fd834d12abd7192e5e5595\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.581493 kubelet[2237]: I0625 16:28:24.581457 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/231b9f8260fd834d12abd7192e5e5595-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-1561673ea7\" (UID: \"231b9f8260fd834d12abd7192e5e5595\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.581676 kubelet[2237]: I0625 16:28:24.581633 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.581822 kubelet[2237]: I0625 16:28:24.581811 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: 
\"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.581912 kubelet[2237]: I0625 16:28:24.581904 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.582008 kubelet[2237]: I0625 16:28:24.581998 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fd729b90fdd5329c3262eb9d9e611e3-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-1561673ea7\" (UID: \"9fd729b90fdd5329c3262eb9d9e611e3\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" Jun 25 16:28:24.815002 kubelet[2237]: E0625 16:28:24.814826 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:24.815771 kubelet[2237]: E0625 16:28:24.815713 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:24.827211 kubelet[2237]: E0625 16:28:24.827165 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:25.130219 kubelet[2237]: I0625 16:28:25.130015 2237 apiserver.go:52] "Watching apiserver" Jun 25 16:28:25.178763 kubelet[2237]: I0625 16:28:25.178707 2237 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:28:25.266539 kubelet[2237]: E0625 16:28:25.266508 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:25.266822 kubelet[2237]: E0625 16:28:25.266708 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:25.267617 kubelet[2237]: E0625 16:28:25.267196 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:25.303816 kubelet[2237]: I0625 16:28:25.303774 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-1561673ea7" podStartSLOduration=4.303654578 podCreationTimestamp="2024-06-25 16:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:25.301052752 +0000 UTC m=+1.342364390" watchObservedRunningTime="2024-06-25 16:28:25.303654578 +0000 UTC m=+1.344966213" Jun 25 16:28:25.349579 kubelet[2237]: I0625 16:28:25.349525 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.4-a-1561673ea7" podStartSLOduration=1.349463005 podCreationTimestamp="2024-06-25 16:28:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:25.327601929 +0000 UTC m=+1.368913580" watchObservedRunningTime="2024-06-25 16:28:25.349463005 +0000 UTC m=+1.390774653" Jun 25 16:28:25.374652 kubelet[2237]: I0625 16:28:25.374606 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.4-a-1561673ea7" podStartSLOduration=1.374527206 podCreationTimestamp="2024-06-25 16:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:25.349779525 +0000 UTC m=+1.391091167" watchObservedRunningTime="2024-06-25 16:28:25.374527206 +0000 UTC m=+1.415838860" Jun 25 16:28:26.268267 kubelet[2237]: E0625 16:28:26.268225 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:28.321418 update_engine[1273]: I0625 16:28:28.321281 1273 update_attempter.cc:509] Updating boot flags... Jun 25 16:28:28.392991 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2302) Jun 25 16:28:29.068014 kubelet[2237]: E0625 16:28:29.067955 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:29.275005 kubelet[2237]: E0625 16:28:29.274966 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:29.829861 sudo[1429]: pam_unix(sudo:session): session closed for user root Jun 25 16:28:29.829000 audit[1429]: USER_END pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.831698 kernel: kauditd_printk_skb: 50 callbacks suppressed Jun 25 16:28:29.831816 kernel: audit: type=1106 audit(1719332909.829:399): pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.837986 kernel: audit: type=1104 audit(1719332909.830:400): pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.830000 audit[1429]: CRED_DISP pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:29.838729 sshd[1425]: pam_unix(sshd:session): session closed for user core Jun 25 16:28:29.842000 audit[1425]: USER_END pid=1425 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:29.850195 kernel: audit: type=1106 audit(1719332909.842:401): pid=1425 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:29.847541 systemd[1]: sshd@6-164.92.91.188:22-139.178.89.65:33228.service: Deactivated successfully. Jun 25 16:28:29.843000 audit[1425]: CRED_DISP pid=1425 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:29.848800 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:28:29.849081 systemd[1]: session-7.scope: Consumed 6.609s CPU time. Jun 25 16:28:29.851781 systemd-logind[1272]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:28:29.854017 systemd-logind[1272]: Removed session 7. Jun 25 16:28:29.859048 kernel: audit: type=1104 audit(1719332909.843:402): pid=1425 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:28:29.859243 kernel: audit: type=1131 audit(1719332909.847:403): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.92.91.188:22-139.178.89.65:33228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:28:29.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-164.92.91.188:22-139.178.89.65:33228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:28:30.211740 kubelet[2237]: E0625 16:28:30.210713 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:30.277326 kubelet[2237]: E0625 16:28:30.277288 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:34.168586 kubelet[2237]: E0625 16:28:34.168547 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:34.286411 kubelet[2237]: E0625 16:28:34.286361 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:35.287591 kubelet[2237]: E0625 16:28:35.287550 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:35.453000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=524908 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:28:35.459988 kernel: audit: type=1400 audit(1719332915.453:404): avc: denied { watch } for pid=2083 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=524908 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:28:35.453000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0006e9fc0 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:35.467990 kernel: audit: type=1300 audit(1719332915.453:404): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c0006e9fc0 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:35.453000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:35.474021 kernel: audit: type=1327 audit(1719332915.453:404): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:36.257000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 
tclass=file permissive=0 Jun 25 16:28:36.263987 kernel: audit: type=1400 audit(1719332916.257:405): avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:36.262000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:36.269988 kernel: audit: type=1400 audit(1719332916.262:406): avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:36.262000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000e83340 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:36.276981 kernel: audit: type=1300 audit(1719332916.262:406): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000e83340 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:36.262000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:36.283989 kernel: audit: type=1327 audit(1719332916.262:406): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:36.284192 kernel: audit: type=1400 audit(1719332916.263:407): avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:36.263000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:36.263000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000e83380 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:36.263000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:36.299637 kernel: audit: type=1300 audit(1719332916.263:407): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000e83380 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:36.299853 kernel: audit: type=1327 audit(1719332916.263:407): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:36.263000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:28:36.263000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000e833c0 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:36.263000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:36.257000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0011b3fe0 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:28:36.257000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:28:37.774322 kubelet[2237]: I0625 16:28:37.774265 2237 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:28:37.774902 containerd[1279]: time="2024-06-25T16:28:37.774844831Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 16:28:37.775494 kubelet[2237]: I0625 16:28:37.775462 2237 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:28:38.593642 kubelet[2237]: I0625 16:28:38.593578 2237 topology_manager.go:215] "Topology Admit Handler" podUID="f81298c5-0b9e-40ba-a71f-f5b73f8aad19" podNamespace="kube-system" podName="kube-proxy-p7xhm" Jun 25 16:28:38.618755 systemd[1]: Created slice kubepods-besteffort-podf81298c5_0b9e_40ba_a71f_f5b73f8aad19.slice - libcontainer container kubepods-besteffort-podf81298c5_0b9e_40ba_a71f_f5b73f8aad19.slice. Jun 25 16:28:38.702473 kubelet[2237]: I0625 16:28:38.702415 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81298c5-0b9e-40ba-a71f-f5b73f8aad19-xtables-lock\") pod \"kube-proxy-p7xhm\" (UID: \"f81298c5-0b9e-40ba-a71f-f5b73f8aad19\") " pod="kube-system/kube-proxy-p7xhm" Jun 25 16:28:38.702688 kubelet[2237]: I0625 16:28:38.702489 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81298c5-0b9e-40ba-a71f-f5b73f8aad19-lib-modules\") pod \"kube-proxy-p7xhm\" (UID: \"f81298c5-0b9e-40ba-a71f-f5b73f8aad19\") " pod="kube-system/kube-proxy-p7xhm" Jun 25 16:28:38.702688 kubelet[2237]: I0625 16:28:38.702529 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpcb\" (UniqueName: \"kubernetes.io/projected/f81298c5-0b9e-40ba-a71f-f5b73f8aad19-kube-api-access-rtpcb\") pod \"kube-proxy-p7xhm\" (UID: \"f81298c5-0b9e-40ba-a71f-f5b73f8aad19\") " pod="kube-system/kube-proxy-p7xhm" Jun 25 16:28:38.702688 kubelet[2237]: I0625 16:28:38.702561 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f81298c5-0b9e-40ba-a71f-f5b73f8aad19-kube-proxy\") pod \"kube-proxy-p7xhm\" (UID: \"f81298c5-0b9e-40ba-a71f-f5b73f8aad19\") " pod="kube-system/kube-proxy-p7xhm" Jun 25 16:28:38.762007 kubelet[2237]: I0625 16:28:38.761930 2237 topology_manager.go:215] "Topology Admit Handler" podUID="a8fdf58f-b56c-45ef-bc09-7154542b8a95" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-xgcvx" Jun 25 16:28:38.770453 systemd[1]: Created slice kubepods-besteffort-poda8fdf58f_b56c_45ef_bc09_7154542b8a95.slice - libcontainer container kubepods-besteffort-poda8fdf58f_b56c_45ef_bc09_7154542b8a95.slice. 
Jun 25 16:28:38.803188 kubelet[2237]: I0625 16:28:38.803136 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a8fdf58f-b56c-45ef-bc09-7154542b8a95-var-lib-calico\") pod \"tigera-operator-76c4974c85-xgcvx\" (UID: \"a8fdf58f-b56c-45ef-bc09-7154542b8a95\") " pod="tigera-operator/tigera-operator-76c4974c85-xgcvx" Jun 25 16:28:38.803831 kubelet[2237]: I0625 16:28:38.803809 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw5jc\" (UniqueName: \"kubernetes.io/projected/a8fdf58f-b56c-45ef-bc09-7154542b8a95-kube-api-access-bw5jc\") pod \"tigera-operator-76c4974c85-xgcvx\" (UID: \"a8fdf58f-b56c-45ef-bc09-7154542b8a95\") " pod="tigera-operator/tigera-operator-76c4974c85-xgcvx" Jun 25 16:28:38.928616 kubelet[2237]: E0625 16:28:38.928459 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:38.930519 containerd[1279]: time="2024-06-25T16:28:38.929987839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p7xhm,Uid:f81298c5-0b9e-40ba-a71f-f5b73f8aad19,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:38.980341 containerd[1279]: time="2024-06-25T16:28:38.980031271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:38.980341 containerd[1279]: time="2024-06-25T16:28:38.980271436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:38.980341 containerd[1279]: time="2024-06-25T16:28:38.980311608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:38.980679 containerd[1279]: time="2024-06-25T16:28:38.980626813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:39.012316 systemd[1]: Started cri-containerd-f8a3ae205c6ff62f53d87492760ed3e378fff388525e2f8c5ecc21877daf600a.scope - libcontainer container f8a3ae205c6ff62f53d87492760ed3e378fff388525e2f8c5ecc21877daf600a. 
Jun 25 16:28:39.032000 audit: BPF prog-id=102 op=LOAD Jun 25 16:28:39.033000 audit: BPF prog-id=103 op=LOAD Jun 25 16:28:39.033000 audit[2344]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2334 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638613361653230356336666636326635336438373439323736306564 Jun 25 16:28:39.034000 audit: BPF prog-id=104 op=LOAD Jun 25 16:28:39.034000 audit[2344]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2334 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638613361653230356336666636326635336438373439323736306564 Jun 25 16:28:39.034000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:28:39.034000 audit: BPF prog-id=103 op=UNLOAD Jun 25 16:28:39.034000 audit: BPF prog-id=105 op=LOAD Jun 25 16:28:39.034000 audit[2344]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2334 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638613361653230356336666636326635336438373439323736306564 Jun 25 16:28:39.062679 containerd[1279]: time="2024-06-25T16:28:39.062441723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p7xhm,Uid:f81298c5-0b9e-40ba-a71f-f5b73f8aad19,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8a3ae205c6ff62f53d87492760ed3e378fff388525e2f8c5ecc21877daf600a\"" Jun 25 16:28:39.064296 kubelet[2237]: E0625 16:28:39.064211 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:39.074408 containerd[1279]: time="2024-06-25T16:28:39.074323204Z" level=info msg="CreateContainer within sandbox \"f8a3ae205c6ff62f53d87492760ed3e378fff388525e2f8c5ecc21877daf600a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:28:39.077700 containerd[1279]: time="2024-06-25T16:28:39.077617200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-xgcvx,Uid:a8fdf58f-b56c-45ef-bc09-7154542b8a95,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:28:39.133075 containerd[1279]: time="2024-06-25T16:28:39.132989541Z" level=info msg="CreateContainer within sandbox \"f8a3ae205c6ff62f53d87492760ed3e378fff388525e2f8c5ecc21877daf600a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns 
container id \"533a284adce3ca782152b603cd745f6aa3409bbab58aa826e23f157a42c8a986\"" Jun 25 16:28:39.136620 containerd[1279]: time="2024-06-25T16:28:39.136533890Z" level=info msg="StartContainer for \"533a284adce3ca782152b603cd745f6aa3409bbab58aa826e23f157a42c8a986\"" Jun 25 16:28:39.158212 containerd[1279]: time="2024-06-25T16:28:39.158042372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:39.158510 containerd[1279]: time="2024-06-25T16:28:39.158231979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:39.158510 containerd[1279]: time="2024-06-25T16:28:39.158281027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:39.158510 containerd[1279]: time="2024-06-25T16:28:39.158315398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:39.181259 systemd[1]: Started cri-containerd-533a284adce3ca782152b603cd745f6aa3409bbab58aa826e23f157a42c8a986.scope - libcontainer container 533a284adce3ca782152b603cd745f6aa3409bbab58aa826e23f157a42c8a986. Jun 25 16:28:39.192279 systemd[1]: Started cri-containerd-441cbba5640f199ae2ec80819e5fa4a03bd5ad6ef3f3a96d6b63ccc5b6fa9199.scope - libcontainer container 441cbba5640f199ae2ec80819e5fa4a03bd5ad6ef3f3a96d6b63ccc5b6fa9199. Jun 25 16:28:39.220000 audit: BPF prog-id=106 op=LOAD Jun 25 16:28:39.221000 audit: BPF prog-id=107 op=LOAD Jun 25 16:28:39.221000 audit[2394]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2380 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434316362626135363430663139396165326563383038313965356661 Jun 25 16:28:39.222000 audit: BPF prog-id=108 op=LOAD Jun 25 16:28:39.222000 audit[2394]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2380 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434316362626135363430663139396165326563383038313965356661 Jun 25 16:28:39.222000 audit: BPF prog-id=108 op=UNLOAD Jun 25 16:28:39.222000 audit: BPF prog-id=107 op=UNLOAD Jun 25 16:28:39.223000 audit: BPF prog-id=109 op=LOAD Jun 25 16:28:39.223000 audit[2394]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2380 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.223000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434316362626135363430663139396165326563383038313965356661 Jun 25 16:28:39.235000 audit: BPF prog-id=110 op=LOAD Jun 25 16:28:39.235000 audit[2393]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2334 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533336132383461646365336361373832313532623630336364373435 Jun 25 16:28:39.235000 audit: BPF prog-id=111 op=LOAD Jun 25 16:28:39.235000 audit[2393]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2334 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533336132383461646365336361373832313532623630336364373435 Jun 25 16:28:39.235000 audit: BPF prog-id=111 op=UNLOAD Jun 25 16:28:39.235000 audit: BPF prog-id=110 op=UNLOAD Jun 25 16:28:39.235000 audit: BPF prog-id=112 op=LOAD Jun 25 16:28:39.235000 audit[2393]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2334 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533336132383461646365336361373832313532623630336364373435 Jun 25 16:28:39.294970 containerd[1279]: time="2024-06-25T16:28:39.294884069Z" level=info msg="StartContainer for \"533a284adce3ca782152b603cd745f6aa3409bbab58aa826e23f157a42c8a986\" returns successfully" Jun 25 16:28:39.312530 containerd[1279]: time="2024-06-25T16:28:39.312464294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-xgcvx,Uid:a8fdf58f-b56c-45ef-bc09-7154542b8a95,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"441cbba5640f199ae2ec80819e5fa4a03bd5ad6ef3f3a96d6b63ccc5b6fa9199\"" Jun 25 16:28:39.316480 containerd[1279]: time="2024-06-25T16:28:39.316421829Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:28:39.500000 audit[2469]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.500000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc66801ae0 a2=0 a3=7ffc66801acc items=0 ppid=2413 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:28:39.503000 audit[2470]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.503000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2a599f80 a2=0 a3=7fff2a599f6c items=0 ppid=2413 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.503000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:28:39.504000 audit[2471]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.504000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcee525b0 a2=0 a3=7fffcee5259c items=0 ppid=2413 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:28:39.506000 audit[2472]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.506000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0084d1c0 a2=0 a3=7ffe0084d1ac items=0 ppid=2413 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:28:39.507000 audit[2473]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.507000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4e136fd0 a2=0 a3=7ffc4e136fbc items=0 ppid=2413 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:28:39.512000 audit[2474]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.512000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc3ffca50 a2=0 a3=7ffcc3ffca3c items=0 ppid=2413 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.512000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:28:39.613000 audit[2475]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.613000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeed5305f0 a2=0 a3=7ffeed5305dc items=0 ppid=2413 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:28:39.623000 audit[2477]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.623000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce5409020 a2=0 a3=7ffce540900c items=0 ppid=2413 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:28:39.634000 audit[2480]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.634000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff99298170 a2=0 a3=7fff9929815c items=0 ppid=2413 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.634000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:28:39.637000 audit[2481]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.637000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9bce1cc0 a2=0 a3=7ffc9bce1cac items=0 ppid=2413 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.637000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:28:39.643000 audit[2483]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.643000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff003bb7c0 a2=0 a3=7fff003bb7ac items=0 ppid=2413 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:28:39.646000 audit[2484]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.646000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbc0eee00 a2=0 a3=7ffcbc0eedec items=0 ppid=2413 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.646000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:28:39.653000 audit[2486]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.653000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffcce67f90 a2=0 a3=7fffcce67f7c items=0 ppid=2413 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:28:39.661000 audit[2489]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.661000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc413c04e0 a2=0 a3=7ffc413c04cc items=0 ppid=2413 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:28:39.665000 audit[2490]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.665000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddc018290 a2=0 a3=7ffddc01827c items=0 ppid=2413 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:28:39.675000 audit[2492]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:28:39.675000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff43a92d40 a2=0 a3=7fff43a92d2c items=0 ppid=2413 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:28:39.678000 audit[2493]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.678000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1072ca90 a2=0 a3=7ffe1072ca7c items=0 ppid=2413 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:28:39.685000 audit[2495]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.685000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc524390f0 a2=0 a3=7ffc524390dc items=0 ppid=2413 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.685000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:28:39.695000 audit[2498]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.695000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8d9e21d0 a2=0 a3=7ffc8d9e21bc items=0 ppid=2413 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:28:39.705000 audit[2501]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.705000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2575ddd0 a2=0 a3=7ffd2575ddbc items=0 ppid=2413 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.705000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:28:39.707000 audit[2502]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.707000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd95ed9720 a2=0 a3=7ffd95ed970c items=0 ppid=2413 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:28:39.715000 audit[2504]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.715000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe3d0edfc0 a2=0 a3=7ffe3d0edfac items=0 ppid=2413 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:39.724000 audit[2507]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.724000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfef39460 a2=0 a3=7ffdfef3944c items=0 ppid=2413 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:39.727000 audit[2508]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.727000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde8f916d0 a2=0 a3=7ffde8f916bc items=0 ppid=2413 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:28:39.733000 audit[2510]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:28:39.733000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe1c12a710 a2=0 a3=7ffe1c12a6fc items=0 ppid=2413 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:28:39.771000 audit[2516]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:39.771000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff1604e8b0 a2=0 a3=7fff1604e89c items=0 ppid=2413 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:39.790000 audit[2516]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:39.790000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff1604e8b0 a2=0 a3=7fff1604e89c items=0 ppid=2413 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:39.794000 audit[2522]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.794000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffea6f3b100 a2=0 a3=7ffea6f3b0ec items=0 ppid=2413 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.794000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:28:39.800000 audit[2524]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.800000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe412ade50 a2=0 a3=7ffe412ade3c items=0 ppid=2413 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:28:39.810000 audit[2527]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.810000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffc2fa601e0 a2=0 a3=7ffc2fa601cc items=0 ppid=2413 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:28:39.816000 audit[2528]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.816000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0ba42f40 a2=0 a3=7ffc0ba42f2c items=0 ppid=2413 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:28:39.824000 audit[2530]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.824000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdeee0e170 a2=0 a3=7ffdeee0e15c items=0 ppid=2413 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:28:39.827000 audit[2531]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.827000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1c5af850 a2=0 a3=7fff1c5af83c items=0 ppid=2413 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:28:39.841000 audit[2533]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.841000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcbda9e2a0 a2=0 a3=7ffcbda9e28c items=0 ppid=2413 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.841000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:28:39.851000 audit[2536]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.851000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcd5e90db0 a2=0 a3=7ffcd5e90d9c items=0 ppid=2413 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:28:39.853000 audit[2537]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.853000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc348fedf0 a2=0 a3=7ffc348feddc items=0 ppid=2413 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.853000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:28:39.861000 audit[2539]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.861000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf0db43d0 a2=0 a3=7ffcf0db43bc items=0 ppid=2413 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:28:39.865000 audit[2540]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.865000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2c977120 a2=0 a3=7fff2c97710c items=0 ppid=2413 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:28:39.872000 audit[2542]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.872000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdafc68720 a2=0 a3=7ffdafc6870c 
items=0 ppid=2413 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:28:39.887000 audit[2545]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.887000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdcdaba50 a2=0 a3=7fffdcdaba3c items=0 ppid=2413 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:28:39.897000 audit[2548]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.897000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd3a52e20 a2=0 a3=7fffd3a52e0c items=0 ppid=2413 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:28:39.900000 audit[2549]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.900000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc0c06930 a2=0 a3=7ffcc0c0691c items=0 ppid=2413 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:28:39.906000 audit[2551]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.906000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc81cf54c0 a2=0 a3=7ffc81cf54ac items=0 ppid=2413 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.906000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:39.914000 audit[2554]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.914000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe3ce6ffb0 a2=0 a3=7ffe3ce6ff9c items=0 ppid=2413 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:28:39.916000 audit[2555]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.916000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8af96370 a2=0 a3=7ffc8af9635c items=0 ppid=2413 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:28:39.922000 audit[2557]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.922000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffed3911830 a2=0 a3=7ffed391181c items=0 ppid=2413 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:28:39.925000 audit[2558]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.925000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea1e8acb0 a2=0 a3=7ffea1e8ac9c items=0 ppid=2413 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:28:39.931000 audit[2560]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.931000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff6b0fbef0 a2=0 a3=7fff6b0fbedc items=0 ppid=2413 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:39.939000 audit[2563]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:28:39.939000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff9c5f8d50 a2=0 a3=7fff9c5f8d3c items=0 ppid=2413 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:28:39.947000 audit[2565]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:28:39.947000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffe77760c70 a2=0 a3=7ffe77760c5c items=0 ppid=2413 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.947000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:39.948000 audit[2565]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:28:39.948000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe77760c70 a2=0 a3=7ffe77760c5c items=0 ppid=2413 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:39.948000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:40.307077 kubelet[2237]: E0625 16:28:40.306441 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:41.034227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207495595.mount: Deactivated successfully. 
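The audit records above capture each ip6tables call kube-proxy makes, but the PROCTITLE field is hex-encoded because the kernel logs the raw argv with its embedded NUL separators. As a minimal sketch (not part of any tooling on this host), the snippet below decodes the proctitle from the pid=2560 record above back into a readable command line.

# Minimal sketch: decode an audit PROCTITLE value from this log back into the
# command line it represents. The kernel hex-encodes the field because argv
# elements are separated by NUL bytes; splitting on b"\x00" recovers them.
import binascii

# proctitle copied verbatim from the pid=2560 ip6tables record above
proctitle_hex = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C"
)

argv = binascii.unhexlify(proctitle_hex).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# -> ip6tables -w 5 -W 100000 -I INPUT -t filter -j KUBE-FIREWALL

The same decoding applies to the ip6tables-restore records nearby; their proctitle resolves to "ip6tables-restore -w 5 -W 100000 --noflush --counters".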
Jun 25 16:28:42.567675 containerd[1279]: time="2024-06-25T16:28:42.567623327Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:42.569525 containerd[1279]: time="2024-06-25T16:28:42.569454802Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076072" Jun 25 16:28:42.569962 containerd[1279]: time="2024-06-25T16:28:42.569899551Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:42.573012 containerd[1279]: time="2024-06-25T16:28:42.572913010Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:42.576247 containerd[1279]: time="2024-06-25T16:28:42.576157284Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:42.578987 containerd[1279]: time="2024-06-25T16:28:42.578881009Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.262133716s" Jun 25 16:28:42.578987 containerd[1279]: time="2024-06-25T16:28:42.578972842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:28:42.584228 containerd[1279]: time="2024-06-25T16:28:42.584012946Z" level=info msg="CreateContainer within sandbox \"441cbba5640f199ae2ec80819e5fa4a03bd5ad6ef3f3a96d6b63ccc5b6fa9199\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:28:42.606089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount372659388.mount: Deactivated successfully. Jun 25 16:28:42.622300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532349634.mount: Deactivated successfully. Jun 25 16:28:42.627805 containerd[1279]: time="2024-06-25T16:28:42.627711359Z" level=info msg="CreateContainer within sandbox \"441cbba5640f199ae2ec80819e5fa4a03bd5ad6ef3f3a96d6b63ccc5b6fa9199\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"760fb21988766138b491069ea80fc9dd2df9ec4c3c701d33c209f62e8571fdc7\"" Jun 25 16:28:42.630534 containerd[1279]: time="2024-06-25T16:28:42.629291522Z" level=info msg="StartContainer for \"760fb21988766138b491069ea80fc9dd2df9ec4c3c701d33c209f62e8571fdc7\"" Jun 25 16:28:42.672338 systemd[1]: Started cri-containerd-760fb21988766138b491069ea80fc9dd2df9ec4c3c701d33c209f62e8571fdc7.scope - libcontainer container 760fb21988766138b491069ea80fc9dd2df9ec4c3c701d33c209f62e8571fdc7. 
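The pull record above reports both the bytes actually fetched for quay.io/tigera/operator:v1.34.0 (bytes read=22076072) and the wall-clock pull time (3.262133716s); the separate size fields are image-store accounting and differ slightly from the bytes read. A small worked check of the effective pull throughput, using only the numbers from those lines:

# Minimal sketch: effective pull throughput for the tigera/operator image,
# computed from the figures in the containerd entries above.
bytes_read = 22076072          # "bytes read" from the stop-pulling record
pull_seconds = 3.262133716     # duration from the "Pulled image ... in" record

rate = bytes_read / pull_seconds
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")
# -> roughly 6.77 MB/s (6.45 MiB/s) for this pull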
Jun 25 16:28:42.689000 audit: BPF prog-id=113 op=LOAD Jun 25 16:28:42.691750 kernel: kauditd_printk_skb: 193 callbacks suppressed Jun 25 16:28:42.691899 kernel: audit: type=1334 audit(1719332922.689:477): prog-id=113 op=LOAD Jun 25 16:28:42.692000 audit: BPF prog-id=114 op=LOAD Jun 25 16:28:42.696079 kernel: audit: type=1334 audit(1719332922.692:478): prog-id=114 op=LOAD Jun 25 16:28:42.692000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2380 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.701234 kernel: audit: type=1300 audit(1719332922.692:478): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2380 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736306662323139383837363631333862343931303639656138306663 Jun 25 16:28:42.706123 kernel: audit: type=1327 audit(1719332922.692:478): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736306662323139383837363631333862343931303639656138306663 Jun 25 16:28:42.692000 audit: BPF prog-id=115 op=LOAD Jun 25 16:28:42.708512 kernel: audit: type=1334 audit(1719332922.692:479): prog-id=115 op=LOAD Jun 25 16:28:42.708675 kernel: audit: type=1300 audit(1719332922.692:479): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2380 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.692000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2380 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736306662323139383837363631333862343931303639656138306663 Jun 25 16:28:42.719097 kernel: audit: type=1327 audit(1719332922.692:479): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736306662323139383837363631333862343931303639656138306663 Jun 25 16:28:42.719399 kernel: audit: type=1334 audit(1719332922.692:480): prog-id=115 op=UNLOAD Jun 25 16:28:42.692000 audit: BPF prog-id=115 op=UNLOAD Jun 25 16:28:42.692000 audit: BPF prog-id=114 op=UNLOAD Jun 25 16:28:42.723105 kernel: audit: type=1334 audit(1719332922.692:481): prog-id=114 op=UNLOAD Jun 25 16:28:42.723260 kernel: audit: type=1334 
audit(1719332922.692:482): prog-id=116 op=LOAD Jun 25 16:28:42.692000 audit: BPF prog-id=116 op=LOAD Jun 25 16:28:42.692000 audit[2583]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2380 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:42.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736306662323139383837363631333862343931303639656138306663 Jun 25 16:28:42.747528 containerd[1279]: time="2024-06-25T16:28:42.747443645Z" level=info msg="StartContainer for \"760fb21988766138b491069ea80fc9dd2df9ec4c3c701d33c209f62e8571fdc7\" returns successfully" Jun 25 16:28:43.328196 kubelet[2237]: I0625 16:28:43.328149 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-p7xhm" podStartSLOduration=5.328090229 podCreationTimestamp="2024-06-25 16:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:40.340388165 +0000 UTC m=+16.381699817" watchObservedRunningTime="2024-06-25 16:28:43.328090229 +0000 UTC m=+19.369401885" Jun 25 16:28:43.328823 kubelet[2237]: I0625 16:28:43.328267 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-xgcvx" podStartSLOduration=2.06360409 podCreationTimestamp="2024-06-25 16:28:38 +0000 UTC" firstStartedPulling="2024-06-25 16:28:39.315124774 +0000 UTC m=+15.356436400" lastFinishedPulling="2024-06-25 16:28:42.579762338 +0000 UTC m=+18.621073983" observedRunningTime="2024-06-25 16:28:43.327085126 +0000 UTC m=+19.368396770" watchObservedRunningTime="2024-06-25 16:28:43.328241673 +0000 UTC m=+19.369553321" Jun 25 16:28:45.799000 audit[2617]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:45.799000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc1b681f00 a2=0 a3=7ffc1b681eec items=0 ppid=2413 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:45.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:45.801000 audit[2617]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:45.801000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1b681f00 a2=0 a3=0 items=0 ppid=2413 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:45.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:45.815000 audit[2619]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2619 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:45.815000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffef656000 a2=0 a3=7fffef655fec items=0 ppid=2413 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:45.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:45.816000 audit[2619]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2619 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:45.816000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffef656000 a2=0 a3=0 items=0 ppid=2413 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:45.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:45.988405 kubelet[2237]: I0625 16:28:45.988340 2237 topology_manager.go:215] "Topology Admit Handler" podUID="fb2d6566-f559-4969-87b0-6e5e80a2a6a0" podNamespace="calico-system" podName="calico-typha-867d8f6cb7-ck5hc" Jun 25 16:28:45.997647 systemd[1]: Created slice kubepods-besteffort-podfb2d6566_f559_4969_87b0_6e5e80a2a6a0.slice - libcontainer container kubepods-besteffort-podfb2d6566_f559_4969_87b0_6e5e80a2a6a0.slice. Jun 25 16:28:46.075269 kubelet[2237]: I0625 16:28:46.075095 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8jp\" (UniqueName: \"kubernetes.io/projected/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-kube-api-access-gp8jp\") pod \"calico-typha-867d8f6cb7-ck5hc\" (UID: \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\") " pod="calico-system/calico-typha-867d8f6cb7-ck5hc" Jun 25 16:28:46.075438 kubelet[2237]: I0625 16:28:46.075287 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-tigera-ca-bundle\") pod \"calico-typha-867d8f6cb7-ck5hc\" (UID: \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\") " pod="calico-system/calico-typha-867d8f6cb7-ck5hc" Jun 25 16:28:46.075438 kubelet[2237]: I0625 16:28:46.075331 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-typha-certs\") pod \"calico-typha-867d8f6cb7-ck5hc\" (UID: \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\") " pod="calico-system/calico-typha-867d8f6cb7-ck5hc" Jun 25 16:28:46.154349 kubelet[2237]: I0625 16:28:46.154290 2237 topology_manager.go:215] "Topology Admit Handler" podUID="e62e6527-6ada-40a9-9d88-f5a264ae7e74" podNamespace="calico-system" podName="calico-node-dbkkk" Jun 25 16:28:46.164677 systemd[1]: Created slice kubepods-besteffort-pode62e6527_6ada_40a9_9d88_f5a264ae7e74.slice - libcontainer container kubepods-besteffort-pode62e6527_6ada_40a9_9d88_f5a264ae7e74.slice. 
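The pair of entries above shows the kubelet admitting calico-typha-867d8f6cb7-ck5hc (podUID fb2d6566-f559-4969-87b0-6e5e80a2a6a0) and systemd creating the matching kubepods-besteffort-pod....slice cgroup. With the systemd cgroup driver, the slice name is derived from the pod's QoS class and its UID with dashes rewritten to underscores, since "-" is the hierarchy separator in systemd slice unit names. A minimal sketch of that mapping, reproducing the unit name seen in the log:

# Minimal sketch: derive the systemd slice unit created for a BestEffort pod
# from its UID, as the kubelet's systemd cgroup driver does. Dashes become
# underscores because "-" separates parent/child slices in unit names.
def besteffort_pod_slice(pod_uid: str) -> str:
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_pod_slice("fb2d6566-f559-4969-87b0-6e5e80a2a6a0"))
# -> kubepods-besteffort-podfb2d6566_f559_4969_87b0_6e5e80a2a6a0.slice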
Jun 25 16:28:46.277069 kubelet[2237]: I0625 16:28:46.277013 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-run-calico\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277069 kubelet[2237]: I0625 16:28:46.277087 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-net-dir\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277380 kubelet[2237]: I0625 16:28:46.277139 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mv8q\" (UniqueName: \"kubernetes.io/projected/e62e6527-6ada-40a9-9d88-f5a264ae7e74-kube-api-access-2mv8q\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277380 kubelet[2237]: I0625 16:28:46.277184 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e62e6527-6ada-40a9-9d88-f5a264ae7e74-tigera-ca-bundle\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277380 kubelet[2237]: I0625 16:28:46.277227 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-xtables-lock\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277380 kubelet[2237]: I0625 16:28:46.277269 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-flexvol-driver-host\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277380 kubelet[2237]: I0625 16:28:46.277302 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-lib-modules\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277613 kubelet[2237]: I0625 16:28:46.277339 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-log-dir\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277613 kubelet[2237]: I0625 16:28:46.277378 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-policysync\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277613 kubelet[2237]: I0625 16:28:46.277411 2237 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-lib-calico\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277613 kubelet[2237]: I0625 16:28:46.277441 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-bin-dir\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.277613 kubelet[2237]: I0625 16:28:46.277479 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e62e6527-6ada-40a9-9d88-f5a264ae7e74-node-certs\") pod \"calico-node-dbkkk\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " pod="calico-system/calico-node-dbkkk" Jun 25 16:28:46.298733 kubelet[2237]: I0625 16:28:46.298655 2237 topology_manager.go:215] "Topology Admit Handler" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" podNamespace="calico-system" podName="csi-node-driver-q78rw" Jun 25 16:28:46.299223 kubelet[2237]: E0625 16:28:46.299169 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:46.310149 kubelet[2237]: E0625 16:28:46.310094 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:46.311315 containerd[1279]: time="2024-06-25T16:28:46.311167863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867d8f6cb7-ck5hc,Uid:fb2d6566-f559-4969-87b0-6e5e80a2a6a0,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:46.380589 kubelet[2237]: I0625 16:28:46.378711 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a3072e8-f1d4-4a0b-a333-9167360f3eb4-kubelet-dir\") pod \"csi-node-driver-q78rw\" (UID: \"1a3072e8-f1d4-4a0b-a333-9167360f3eb4\") " pod="calico-system/csi-node-driver-q78rw" Jun 25 16:28:46.380589 kubelet[2237]: I0625 16:28:46.379010 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6b5h\" (UniqueName: \"kubernetes.io/projected/1a3072e8-f1d4-4a0b-a333-9167360f3eb4-kube-api-access-g6b5h\") pod \"csi-node-driver-q78rw\" (UID: \"1a3072e8-f1d4-4a0b-a333-9167360f3eb4\") " pod="calico-system/csi-node-driver-q78rw" Jun 25 16:28:46.380589 kubelet[2237]: I0625 16:28:46.379254 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a3072e8-f1d4-4a0b-a333-9167360f3eb4-registration-dir\") pod \"csi-node-driver-q78rw\" (UID: \"1a3072e8-f1d4-4a0b-a333-9167360f3eb4\") " pod="calico-system/csi-node-driver-q78rw" Jun 25 16:28:46.380589 kubelet[2237]: I0625 16:28:46.379416 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" 
(UniqueName: \"kubernetes.io/host-path/1a3072e8-f1d4-4a0b-a333-9167360f3eb4-varrun\") pod \"csi-node-driver-q78rw\" (UID: \"1a3072e8-f1d4-4a0b-a333-9167360f3eb4\") " pod="calico-system/csi-node-driver-q78rw" Jun 25 16:28:46.380589 kubelet[2237]: I0625 16:28:46.379476 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a3072e8-f1d4-4a0b-a333-9167360f3eb4-socket-dir\") pod \"csi-node-driver-q78rw\" (UID: \"1a3072e8-f1d4-4a0b-a333-9167360f3eb4\") " pod="calico-system/csi-node-driver-q78rw" Jun 25 16:28:46.383170 kubelet[2237]: E0625 16:28:46.382426 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.383170 kubelet[2237]: W0625 16:28:46.382455 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.383170 kubelet[2237]: E0625 16:28:46.382531 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.383684 kubelet[2237]: E0625 16:28:46.383494 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.383684 kubelet[2237]: W0625 16:28:46.383515 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.383684 kubelet[2237]: E0625 16:28:46.383560 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.384139 kubelet[2237]: E0625 16:28:46.384119 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.384273 kubelet[2237]: W0625 16:28:46.384254 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.384569 kubelet[2237]: E0625 16:28:46.384546 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.384855 kubelet[2237]: E0625 16:28:46.384809 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.391558 kubelet[2237]: W0625 16:28:46.391493 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.391845 kubelet[2237]: E0625 16:28:46.391819 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:46.392493 kubelet[2237]: E0625 16:28:46.392466 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.392638 kubelet[2237]: W0625 16:28:46.392613 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.392774 kubelet[2237]: E0625 16:28:46.392755 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.398408 kubelet[2237]: E0625 16:28:46.398367 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.405042 kubelet[2237]: W0625 16:28:46.404984 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.405334 kubelet[2237]: E0625 16:28:46.405308 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.409988 kubelet[2237]: E0625 16:28:46.405849 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.409988 kubelet[2237]: W0625 16:28:46.405875 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.409988 kubelet[2237]: E0625 16:28:46.405907 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.409988 kubelet[2237]: E0625 16:28:46.406235 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.409988 kubelet[2237]: W0625 16:28:46.406247 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.409988 kubelet[2237]: E0625 16:28:46.406265 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.439126 kubelet[2237]: E0625 16:28:46.439085 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.439126 kubelet[2237]: W0625 16:28:46.439114 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.439416 kubelet[2237]: E0625 16:28:46.439150 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:46.440649 containerd[1279]: time="2024-06-25T16:28:46.440424700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:46.444318 containerd[1279]: time="2024-06-25T16:28:46.440694637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:46.444716 containerd[1279]: time="2024-06-25T16:28:46.444610663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:46.444829 containerd[1279]: time="2024-06-25T16:28:46.444761051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:46.472008 kubelet[2237]: E0625 16:28:46.471554 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:46.472360 containerd[1279]: time="2024-06-25T16:28:46.472304142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbkkk,Uid:e62e6527-6ada-40a9-9d88-f5a264ae7e74,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:46.483111 kubelet[2237]: E0625 16:28:46.482846 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.483111 kubelet[2237]: W0625 16:28:46.482875 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.483111 kubelet[2237]: E0625 16:28:46.482903 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.483860 kubelet[2237]: E0625 16:28:46.483575 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.483860 kubelet[2237]: W0625 16:28:46.483674 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.483860 kubelet[2237]: E0625 16:28:46.483704 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.484249 kubelet[2237]: E0625 16:28:46.484110 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.484249 kubelet[2237]: W0625 16:28:46.484124 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.484249 kubelet[2237]: E0625 16:28:46.484147 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:46.484578 kubelet[2237]: E0625 16:28:46.484479 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.484578 kubelet[2237]: W0625 16:28:46.484492 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.484578 kubelet[2237]: E0625 16:28:46.484534 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.484883 kubelet[2237]: E0625 16:28:46.484766 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.484883 kubelet[2237]: W0625 16:28:46.484780 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.484883 kubelet[2237]: E0625 16:28:46.484821 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.485222 kubelet[2237]: E0625 16:28:46.485066 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.485222 kubelet[2237]: W0625 16:28:46.485081 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.485222 kubelet[2237]: E0625 16:28:46.485108 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.485469 kubelet[2237]: E0625 16:28:46.485452 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.485566 kubelet[2237]: W0625 16:28:46.485549 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.485674 kubelet[2237]: E0625 16:28:46.485659 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.487149 kubelet[2237]: E0625 16:28:46.487121 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.488580 kubelet[2237]: W0625 16:28:46.487313 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.488893 kubelet[2237]: E0625 16:28:46.488846 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:46.489676 kubelet[2237]: E0625 16:28:46.489649 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.489897 kubelet[2237]: W0625 16:28:46.489849 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.496781 kubelet[2237]: E0625 16:28:46.494970 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.498185 kubelet[2237]: E0625 16:28:46.498154 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.498589 kubelet[2237]: W0625 16:28:46.498364 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.498913 kubelet[2237]: E0625 16:28:46.498894 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.499050 kubelet[2237]: W0625 16:28:46.499030 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.499438 kubelet[2237]: E0625 16:28:46.499417 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.499581 kubelet[2237]: W0625 16:28:46.499562 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.499990 kubelet[2237]: E0625 16:28:46.499973 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.500118 kubelet[2237]: W0625 16:28:46.500099 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.500439 kubelet[2237]: E0625 16:28:46.500422 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.500556 kubelet[2237]: W0625 16:28:46.500540 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.500662 kubelet[2237]: E0625 16:28:46.500646 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:46.504245 kubelet[2237]: E0625 16:28:46.504202 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.504485 kubelet[2237]: W0625 16:28:46.504459 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.504621 kubelet[2237]: E0625 16:28:46.504602 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.505740 systemd[1]: Started cri-containerd-9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28.scope - libcontainer container 9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28. Jun 25 16:28:46.506072 kubelet[2237]: E0625 16:28:46.506049 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.506271 kubelet[2237]: W0625 16:28:46.506250 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.506369 kubelet[2237]: E0625 16:28:46.506356 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.506683 kubelet[2237]: E0625 16:28:46.506651 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.507282 kubelet[2237]: E0625 16:28:46.507254 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.507464 kubelet[2237]: W0625 16:28:46.507439 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.507613 kubelet[2237]: E0625 16:28:46.507595 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.508049 kubelet[2237]: E0625 16:28:46.508029 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.508201 kubelet[2237]: W0625 16:28:46.508178 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.508362 kubelet[2237]: E0625 16:28:46.508343 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:46.508748 kubelet[2237]: E0625 16:28:46.508728 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.508878 kubelet[2237]: W0625 16:28:46.508856 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.509067 kubelet[2237]: E0625 16:28:46.509049 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.509515 kubelet[2237]: E0625 16:28:46.509498 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.509654 kubelet[2237]: W0625 16:28:46.509633 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.509807 kubelet[2237]: E0625 16:28:46.509790 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.509983 kubelet[2237]: E0625 16:28:46.509967 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.510358 kubelet[2237]: E0625 16:28:46.510341 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.510494 kubelet[2237]: W0625 16:28:46.510475 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.510621 kubelet[2237]: E0625 16:28:46.510603 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.513773 kubelet[2237]: E0625 16:28:46.511032 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.513773 kubelet[2237]: W0625 16:28:46.513455 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.513773 kubelet[2237]: E0625 16:28:46.513495 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:28:46.515987 kubelet[2237]: E0625 16:28:46.515354 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.515987 kubelet[2237]: W0625 16:28:46.515382 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.515987 kubelet[2237]: E0625 16:28:46.515418 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.515987 kubelet[2237]: E0625 16:28:46.515786 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.516622 kubelet[2237]: E0625 16:28:46.516392 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.516622 kubelet[2237]: W0625 16:28:46.516410 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.516622 kubelet[2237]: E0625 16:28:46.516435 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.517906 kubelet[2237]: E0625 16:28:46.516877 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.517906 kubelet[2237]: W0625 16:28:46.516892 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.517906 kubelet[2237]: E0625 16:28:46.516913 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.517906 kubelet[2237]: E0625 16:28:46.516973 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.542637 kubelet[2237]: E0625 16:28:46.542590 2237 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:28:46.542637 kubelet[2237]: W0625 16:28:46.542626 2237 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:28:46.542910 kubelet[2237]: E0625 16:28:46.542661 2237 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:28:46.566582 containerd[1279]: time="2024-06-25T16:28:46.566431156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:46.566848 containerd[1279]: time="2024-06-25T16:28:46.566601248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:46.566848 containerd[1279]: time="2024-06-25T16:28:46.566638144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:46.566848 containerd[1279]: time="2024-06-25T16:28:46.566703200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:46.583000 audit: BPF prog-id=117 op=LOAD Jun 25 16:28:46.583000 audit: BPF prog-id=118 op=LOAD Jun 25 16:28:46.583000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2639 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643665616238376266656638346632363731376137383737343730 Jun 25 16:28:46.584000 audit: BPF prog-id=119 op=LOAD Jun 25 16:28:46.584000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2639 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643665616238376266656638346632363731376137383737343730 Jun 25 16:28:46.584000 audit: BPF prog-id=119 op=UNLOAD Jun 25 16:28:46.587000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:28:46.587000 audit: BPF prog-id=120 op=LOAD Jun 25 16:28:46.587000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2639 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962643665616238376266656638346632363731376137383737343730 Jun 25 16:28:46.608276 systemd[1]: Started cri-containerd-c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf.scope - libcontainer container c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf. 
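The long run of FlexVolume errors above comes from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's pod2daemon-flexvol init container (whose image pull appears further down in this log) has installed the driver binary: the probe produces no output, so unmarshalling the expected JSON reply fails with "unexpected end of JSON input". As a hedged illustration of what the kubelet expects back from a FlexVolume "init" call, here is a minimal driver sketch, not Calico's actual uds driver:

#!/usr/bin/env python3
# Minimal sketch of a FlexVolume driver handling only the "init" call.
# A JSON status object on stdout is what the kubelet tries to unmarshal;
# the empty output in the log above is why it reports
# "unexpected end of JSON input". This is illustrative, not Calico's driver.
import json
import sys

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        print(json.dumps({
            "status": "Success",
            "capabilities": {"attach": False},  # no separate attach/detach phase
        }))
        return 0
    # Calls the driver does not implement are reported as not supported.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())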
Jun 25 16:28:46.634000 audit: BPF prog-id=121 op=LOAD Jun 25 16:28:46.640000 audit: BPF prog-id=122 op=LOAD Jun 25 16:28:46.640000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2701 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333616337363034646464386561333066303433393765643462386430 Jun 25 16:28:46.640000 audit: BPF prog-id=123 op=LOAD Jun 25 16:28:46.640000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2701 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333616337363034646464386561333066303433393765643462386430 Jun 25 16:28:46.640000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:28:46.641000 audit: BPF prog-id=122 op=UNLOAD Jun 25 16:28:46.641000 audit: BPF prog-id=124 op=LOAD Jun 25 16:28:46.641000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2701 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333616337363034646464386561333066303433393765643462386430 Jun 25 16:28:46.689601 containerd[1279]: time="2024-06-25T16:28:46.689555288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbkkk,Uid:e62e6527-6ada-40a9-9d88-f5a264ae7e74,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\"" Jun 25 16:28:46.696029 kubelet[2237]: E0625 16:28:46.694887 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:46.698464 containerd[1279]: time="2024-06-25T16:28:46.698409719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:28:46.701083 containerd[1279]: time="2024-06-25T16:28:46.700922149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867d8f6cb7-ck5hc,Uid:fb2d6566-f559-4969-87b0-6e5e80a2a6a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\"" Jun 25 16:28:46.703531 kubelet[2237]: E0625 16:28:46.702826 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:46.830000 
audit[2742]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:46.830000 audit[2742]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff418315d0 a2=0 a3=7fff418315bc items=0 ppid=2413 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:46.832000 audit[2742]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:46.832000 audit[2742]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff418315d0 a2=0 a3=0 items=0 ppid=2413 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:46.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:47.859984 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:28:47.860172 kernel: audit: type=1325 audit(1719332927.854:501): table=filter:95 family=2 entries=16 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:47.854000 audit[2744]: NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:47.854000 audit[2744]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff7a07c2b0 a2=0 a3=7fff7a07c29c items=0 ppid=2413 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:47.867385 kernel: audit: type=1300 audit(1719332927.854:501): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff7a07c2b0 a2=0 a3=7fff7a07c29c items=0 ppid=2413 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:47.854000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:47.871454 kernel: audit: type=1327 audit(1719332927.854:501): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:47.855000 audit[2744]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:47.875117 kernel: audit: type=1325 audit(1719332927.855:502): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2744 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:47.855000 audit[2744]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7a07c2b0 a2=0 a3=0 items=0 ppid=2413 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:47.880007 kernel: audit: type=1300 audit(1719332927.855:502): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7a07c2b0 a2=0 a3=0 items=0 ppid=2413 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:47.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:47.882968 kernel: audit: type=1327 audit(1719332927.855:502): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:48.117643 containerd[1279]: time="2024-06-25T16:28:48.117457917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:48.118553 containerd[1279]: time="2024-06-25T16:28:48.118470796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:28:48.121140 containerd[1279]: time="2024-06-25T16:28:48.121073281Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:48.124601 containerd[1279]: time="2024-06-25T16:28:48.124535434Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:48.128909 containerd[1279]: time="2024-06-25T16:28:48.128823168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:48.130472 containerd[1279]: time="2024-06-25T16:28:48.130389417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.43166203s" Jun 25 16:28:48.130682 containerd[1279]: time="2024-06-25T16:28:48.130469194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:28:48.132316 containerd[1279]: time="2024-06-25T16:28:48.131876671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:28:48.135766 containerd[1279]: time="2024-06-25T16:28:48.135686186Z" level=info msg="CreateContainer within sandbox \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:28:48.188055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191203362.mount: Deactivated successfully. 
Jun 25 16:28:48.195901 containerd[1279]: time="2024-06-25T16:28:48.195819468Z" level=info msg="CreateContainer within sandbox \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf\"" Jun 25 16:28:48.198253 containerd[1279]: time="2024-06-25T16:28:48.198192280Z" level=info msg="StartContainer for \"b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf\"" Jun 25 16:28:48.203141 kubelet[2237]: E0625 16:28:48.202991 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:48.287699 systemd[1]: run-containerd-runc-k8s.io-b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf-runc.QWqlsk.mount: Deactivated successfully. Jun 25 16:28:48.299448 systemd[1]: Started cri-containerd-b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf.scope - libcontainer container b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf. Jun 25 16:28:48.345540 kernel: audit: type=1334 audit(1719332928.336:503): prog-id=125 op=LOAD Jun 25 16:28:48.345698 kernel: audit: type=1300 audit(1719332928.336:503): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2701 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:48.336000 audit: BPF prog-id=125 op=LOAD Jun 25 16:28:48.336000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2701 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:48.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643463363832323932623764303334643265373835663035373762 Jun 25 16:28:48.352009 kernel: audit: type=1327 audit(1719332928.336:503): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643463363832323932623764303334643265373835663035373762 Jun 25 16:28:48.336000 audit: BPF prog-id=126 op=LOAD Jun 25 16:28:48.358097 kernel: audit: type=1334 audit(1719332928.336:504): prog-id=126 op=LOAD Jun 25 16:28:48.336000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2701 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:48.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643463363832323932623764303334643265373835663035373762 
Jun 25 16:28:48.336000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:28:48.336000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:28:48.336000 audit: BPF prog-id=127 op=LOAD Jun 25 16:28:48.336000 audit[2758]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2701 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:48.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239643463363832323932623764303334643265373835663035373762 Jun 25 16:28:48.384192 containerd[1279]: time="2024-06-25T16:28:48.384035834Z" level=info msg="StartContainer for \"b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf\" returns successfully" Jun 25 16:28:48.404054 systemd[1]: cri-containerd-b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf.scope: Deactivated successfully. Jun 25 16:28:48.409000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:28:48.438266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf-rootfs.mount: Deactivated successfully. Jun 25 16:28:48.524221 containerd[1279]: time="2024-06-25T16:28:48.524137877Z" level=info msg="shim disconnected" id=b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf namespace=k8s.io Jun 25 16:28:48.524221 containerd[1279]: time="2024-06-25T16:28:48.524216661Z" level=warning msg="cleaning up after shim disconnected" id=b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf namespace=k8s.io Jun 25 16:28:48.524221 containerd[1279]: time="2024-06-25T16:28:48.524227735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:49.337739 containerd[1279]: time="2024-06-25T16:28:49.337676335Z" level=info msg="StopPodSandbox for \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\"" Jun 25 16:28:49.343825 containerd[1279]: time="2024-06-25T16:28:49.337785709Z" level=info msg="Container to stop \"b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:28:49.341067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf-shm.mount: Deactivated successfully. Jun 25 16:28:49.382000 audit: BPF prog-id=121 op=UNLOAD Jun 25 16:28:49.386000 audit: BPF prog-id=124 op=UNLOAD Jun 25 16:28:49.383241 systemd[1]: cri-containerd-c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf.scope: Deactivated successfully. Jun 25 16:28:49.439682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf-rootfs.mount: Deactivated successfully. 
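The flexvol-driver container above (b9d4c682…) starts, exits and is torn down within roughly a second, and its 64-character hex ID recurs across the containerd, systemd and audit entries. A small illustrative helper, not tied to any particular tooling, for pulling such IDs out of a captured log and counting their occurrences when correlating entries:

```python
# Extract 64-hex-character container/sandbox IDs from captured log text and
# count how often each appears, to correlate entries across subsystems.
import re
from collections import Counter

def container_ids(log_text: str) -> Counter:
    return Counter(re.findall(r"\b[0-9a-f]{64}\b", log_text))

sample = ('systemd[1]: Started cri-containerd-b9d4c682292b7d034d2e785f0577b3'
          '9218346481047b56c102dd64092dd80ddf.scope - libcontainer container ...')
print(container_ids(sample))
```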
Jun 25 16:28:49.464816 containerd[1279]: time="2024-06-25T16:28:49.464729767Z" level=info msg="shim disconnected" id=c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf namespace=k8s.io Jun 25 16:28:49.466477 containerd[1279]: time="2024-06-25T16:28:49.466416261Z" level=warning msg="cleaning up after shim disconnected" id=c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf namespace=k8s.io Jun 25 16:28:49.466685 containerd[1279]: time="2024-06-25T16:28:49.466662117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:49.523845 containerd[1279]: time="2024-06-25T16:28:49.523775671Z" level=info msg="TearDown network for sandbox \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" successfully" Jun 25 16:28:49.524137 containerd[1279]: time="2024-06-25T16:28:49.524094912Z" level=info msg="StopPodSandbox for \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" returns successfully" Jun 25 16:28:49.642485 kubelet[2237]: I0625 16:28:49.642309 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-flexvol-driver-host\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.642485 kubelet[2237]: I0625 16:28:49.642382 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-policysync\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.642485 kubelet[2237]: I0625 16:28:49.642416 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-lib-modules\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.642485 kubelet[2237]: I0625 16:28:49.642450 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-log-dir\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.642485 kubelet[2237]: I0625 16:28:49.642482 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-net-dir\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.643847 kubelet[2237]: I0625 16:28:49.642524 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e62e6527-6ada-40a9-9d88-f5a264ae7e74-tigera-ca-bundle\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.643847 kubelet[2237]: I0625 16:28:49.642561 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e62e6527-6ada-40a9-9d88-f5a264ae7e74-node-certs\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.643847 kubelet[2237]: I0625 16:28:49.642593 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2mv8q\" (UniqueName: \"kubernetes.io/projected/e62e6527-6ada-40a9-9d88-f5a264ae7e74-kube-api-access-2mv8q\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.643847 kubelet[2237]: I0625 16:28:49.642642 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-lib-calico\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.643847 kubelet[2237]: I0625 16:28:49.642671 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-xtables-lock\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.643847 kubelet[2237]: I0625 16:28:49.642708 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-bin-dir\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.644170 kubelet[2237]: I0625 16:28:49.642745 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-run-calico\") pod \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\" (UID: \"e62e6527-6ada-40a9-9d88-f5a264ae7e74\") " Jun 25 16:28:49.644170 kubelet[2237]: I0625 16:28:49.642878 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.644170 kubelet[2237]: I0625 16:28:49.643026 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.644170 kubelet[2237]: I0625 16:28:49.643075 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-policysync" (OuterVolumeSpecName: "policysync") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.644170 kubelet[2237]: I0625 16:28:49.643108 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.645698 kubelet[2237]: I0625 16:28:49.643133 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.645698 kubelet[2237]: I0625 16:28:49.643182 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.645698 kubelet[2237]: I0625 16:28:49.643742 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62e6527-6ada-40a9-9d88-f5a264ae7e74-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:28:49.646811 kubelet[2237]: I0625 16:28:49.646001 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.652280 systemd[1]: var-lib-kubelet-pods-e62e6527\x2d6ada\x2d40a9\x2d9d88\x2df5a264ae7e74-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 16:28:49.659247 kubelet[2237]: I0625 16:28:49.656387 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62e6527-6ada-40a9-9d88-f5a264ae7e74-node-certs" (OuterVolumeSpecName: "node-certs") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:28:49.659247 kubelet[2237]: I0625 16:28:49.656501 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.659247 kubelet[2237]: I0625 16:28:49.656525 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:28:49.663307 systemd[1]: var-lib-kubelet-pods-e62e6527\x2d6ada\x2d40a9\x2d9d88\x2df5a264ae7e74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mv8q.mount: Deactivated successfully. 
Jun 25 16:28:49.666080 kubelet[2237]: I0625 16:28:49.666026 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62e6527-6ada-40a9-9d88-f5a264ae7e74-kube-api-access-2mv8q" (OuterVolumeSpecName: "kube-api-access-2mv8q") pod "e62e6527-6ada-40a9-9d88-f5a264ae7e74" (UID: "e62e6527-6ada-40a9-9d88-f5a264ae7e74"). InnerVolumeSpecName "kube-api-access-2mv8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:28:49.747919 kubelet[2237]: I0625 16:28:49.747863 2237 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-flexvol-driver-host\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.748327 kubelet[2237]: I0625 16:28:49.748300 2237 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-policysync\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.748481 kubelet[2237]: I0625 16:28:49.748459 2237 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-lib-modules\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.748638 kubelet[2237]: I0625 16:28:49.748617 2237 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-log-dir\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.748780 kubelet[2237]: I0625 16:28:49.748763 2237 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e62e6527-6ada-40a9-9d88-f5a264ae7e74-tigera-ca-bundle\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.748903 kubelet[2237]: I0625 16:28:49.748882 2237 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-net-dir\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.749062 kubelet[2237]: I0625 16:28:49.749046 2237 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e62e6527-6ada-40a9-9d88-f5a264ae7e74-node-certs\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.749291 kubelet[2237]: I0625 16:28:49.749273 2237 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2mv8q\" (UniqueName: \"kubernetes.io/projected/e62e6527-6ada-40a9-9d88-f5a264ae7e74-kube-api-access-2mv8q\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.750875 kubelet[2237]: I0625 16:28:49.750840 2237 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-lib-calico\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.751660 kubelet[2237]: I0625 16:28:49.751615 2237 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-xtables-lock\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.752252 kubelet[2237]: I0625 16:28:49.752229 2237 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-cni-bin-dir\") on node 
\"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:49.752869 kubelet[2237]: I0625 16:28:49.752837 2237 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e62e6527-6ada-40a9-9d88-f5a264ae7e74-var-run-calico\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:50.204853 kubelet[2237]: E0625 16:28:50.204805 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:50.237143 systemd[1]: Removed slice kubepods-besteffort-pode62e6527_6ada_40a9_9d88_f5a264ae7e74.slice - libcontainer container kubepods-besteffort-pode62e6527_6ada_40a9_9d88_f5a264ae7e74.slice. Jun 25 16:28:50.346196 kubelet[2237]: I0625 16:28:50.346156 2237 scope.go:117] "RemoveContainer" containerID="b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf" Jun 25 16:28:50.351185 containerd[1279]: time="2024-06-25T16:28:50.351103310Z" level=info msg="RemoveContainer for \"b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf\"" Jun 25 16:28:50.356708 containerd[1279]: time="2024-06-25T16:28:50.356623672Z" level=info msg="RemoveContainer for \"b9d4c682292b7d034d2e785f0577b39218346481047b56c102dd64092dd80ddf\" returns successfully" Jun 25 16:28:50.426110 kubelet[2237]: I0625 16:28:50.426058 2237 topology_manager.go:215] "Topology Admit Handler" podUID="4934c231-2f46-42ab-a190-38643a91be54" podNamespace="calico-system" podName="calico-node-v88d5" Jun 25 16:28:50.427560 kubelet[2237]: E0625 16:28:50.427502 2237 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e62e6527-6ada-40a9-9d88-f5a264ae7e74" containerName="flexvol-driver" Jun 25 16:28:50.427768 kubelet[2237]: I0625 16:28:50.427674 2237 memory_manager.go:346] "RemoveStaleState removing state" podUID="e62e6527-6ada-40a9-9d88-f5a264ae7e74" containerName="flexvol-driver" Jun 25 16:28:50.440864 systemd[1]: Created slice kubepods-besteffort-pod4934c231_2f46_42ab_a190_38643a91be54.slice - libcontainer container kubepods-besteffort-pod4934c231_2f46_42ab_a190_38643a91be54.slice. 
Jun 25 16:28:50.560728 kubelet[2237]: I0625 16:28:50.560546 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4934c231-2f46-42ab-a190-38643a91be54-tigera-ca-bundle\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.560728 kubelet[2237]: I0625 16:28:50.560621 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-cni-bin-dir\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.561736 kubelet[2237]: I0625 16:28:50.561094 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-flexvol-driver-host\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.561736 kubelet[2237]: I0625 16:28:50.561162 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-cni-log-dir\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.561736 kubelet[2237]: I0625 16:28:50.561194 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-lib-modules\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.561736 kubelet[2237]: I0625 16:28:50.561231 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-xtables-lock\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.561736 kubelet[2237]: I0625 16:28:50.561267 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-var-lib-calico\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.562109 kubelet[2237]: I0625 16:28:50.561305 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl28g\" (UniqueName: \"kubernetes.io/projected/4934c231-2f46-42ab-a190-38643a91be54-kube-api-access-jl28g\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.562109 kubelet[2237]: I0625 16:28:50.561342 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4934c231-2f46-42ab-a190-38643a91be54-node-certs\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.562109 kubelet[2237]: I0625 16:28:50.561375 2237 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-var-run-calico\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.562109 kubelet[2237]: I0625 16:28:50.561415 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-policysync\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.562109 kubelet[2237]: I0625 16:28:50.561447 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4934c231-2f46-42ab-a190-38643a91be54-cni-net-dir\") pod \"calico-node-v88d5\" (UID: \"4934c231-2f46-42ab-a190-38643a91be54\") " pod="calico-system/calico-node-v88d5" Jun 25 16:28:50.745872 kubelet[2237]: E0625 16:28:50.745010 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:50.747592 containerd[1279]: time="2024-06-25T16:28:50.747520931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v88d5,Uid:4934c231-2f46-42ab-a190-38643a91be54,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:50.875918 containerd[1279]: time="2024-06-25T16:28:50.872910037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:50.875918 containerd[1279]: time="2024-06-25T16:28:50.873048472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:50.875918 containerd[1279]: time="2024-06-25T16:28:50.873084908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:50.875918 containerd[1279]: time="2024-06-25T16:28:50.873113912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:50.947276 systemd[1]: Started cri-containerd-81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515.scope - libcontainer container 81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515. 
Jun 25 16:28:51.004000 audit: BPF prog-id=128 op=LOAD Jun 25 16:28:51.005000 audit: BPF prog-id=129 op=LOAD Jun 25 16:28:51.005000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2861 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831636537393764373366656136653136316338643133373530346565 Jun 25 16:28:51.007000 audit: BPF prog-id=130 op=LOAD Jun 25 16:28:51.007000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2861 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831636537393764373366656136653136316338643133373530346565 Jun 25 16:28:51.007000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:28:51.007000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:28:51.007000 audit: BPF prog-id=131 op=LOAD Jun 25 16:28:51.007000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2861 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831636537393764373366656136653136316338643133373530346565 Jun 25 16:28:51.051864 containerd[1279]: time="2024-06-25T16:28:51.051777679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v88d5,Uid:4934c231-2f46-42ab-a190-38643a91be54,Namespace:calico-system,Attempt:0,} returns sandbox id \"81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515\"" Jun 25 16:28:51.054561 kubelet[2237]: E0625 16:28:51.053538 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:51.059361 containerd[1279]: time="2024-06-25T16:28:51.059298805Z" level=info msg="CreateContainer within sandbox \"81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:28:51.109100 containerd[1279]: time="2024-06-25T16:28:51.109016154Z" level=info msg="CreateContainer within sandbox \"81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb\"" Jun 25 16:28:51.113540 containerd[1279]: time="2024-06-25T16:28:51.113476817Z" level=info msg="StartContainer for 
\"6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb\"" Jun 25 16:28:51.224229 systemd[1]: Started cri-containerd-6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb.scope - libcontainer container 6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb. Jun 25 16:28:51.299000 audit: BPF prog-id=132 op=LOAD Jun 25 16:28:51.299000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2861 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373366393962373938313162666635373436623162656265646266 Jun 25 16:28:51.300000 audit: BPF prog-id=133 op=LOAD Jun 25 16:28:51.300000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2861 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373366393962373938313162666635373436623162656265646266 Jun 25 16:28:51.302000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:28:51.302000 audit: BPF prog-id=132 op=UNLOAD Jun 25 16:28:51.302000 audit: BPF prog-id=134 op=LOAD Jun 25 16:28:51.302000 audit[2903]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2861 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662373366393962373938313162666635373436623162656265646266 Jun 25 16:28:51.408912 containerd[1279]: time="2024-06-25T16:28:51.408857507Z" level=info msg="StartContainer for \"6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb\" returns successfully" Jun 25 16:28:51.422846 containerd[1279]: time="2024-06-25T16:28:51.422763561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:51.424623 containerd[1279]: time="2024-06-25T16:28:51.424536882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:28:51.427978 containerd[1279]: time="2024-06-25T16:28:51.427892177Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:51.431011 containerd[1279]: time="2024-06-25T16:28:51.430927539Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 
16:28:51.433191 containerd[1279]: time="2024-06-25T16:28:51.432346123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:51.433526 containerd[1279]: time="2024-06-25T16:28:51.433111501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.30117128s" Jun 25 16:28:51.433679 containerd[1279]: time="2024-06-25T16:28:51.433655013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:28:51.451018 containerd[1279]: time="2024-06-25T16:28:51.449401052Z" level=info msg="CreateContainer within sandbox \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:28:51.462590 systemd[1]: cri-containerd-6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb.scope: Deactivated successfully. Jun 25 16:28:51.465000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:28:51.486997 containerd[1279]: time="2024-06-25T16:28:51.486814722Z" level=info msg="CreateContainer within sandbox \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\"" Jun 25 16:28:51.489722 containerd[1279]: time="2024-06-25T16:28:51.489671822Z" level=info msg="StartContainer for \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\"" Jun 25 16:28:51.540404 containerd[1279]: time="2024-06-25T16:28:51.540327344Z" level=info msg="shim disconnected" id=6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb namespace=k8s.io Jun 25 16:28:51.541047 containerd[1279]: time="2024-06-25T16:28:51.540920341Z" level=warning msg="cleaning up after shim disconnected" id=6b73f99b79811bff5746b1bebedbfd5cf25c256692588bc42760357453138dfb namespace=k8s.io Jun 25 16:28:51.541234 containerd[1279]: time="2024-06-25T16:28:51.541208847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:51.568253 systemd[1]: Started cri-containerd-b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316.scope - libcontainer container b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316. 
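The typha pull above reports 29,458,030 compressed bytes read in 3.30117128 s (against an unpacked image size of 30,905,782 bytes). A quick back-of-the-envelope effective pull rate from those two logged numbers:

```python
# Rough effective pull rate for the calico/typha image, from the figures
# logged above (compressed bytes read over the reported pull duration).
bytes_read = 29_458_030
pull_seconds = 3.30117128
print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")   # ~8.9 MB/s
```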
Jun 25 16:28:51.607000 audit: BPF prog-id=135 op=LOAD Jun 25 16:28:51.608000 audit: BPF prog-id=136 op=LOAD Jun 25 16:28:51.608000 audit[2954]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2639 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233383935396138643362373733366337653933346137303338613963 Jun 25 16:28:51.609000 audit: BPF prog-id=137 op=LOAD Jun 25 16:28:51.609000 audit[2954]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2639 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233383935396138643362373733366337653933346137303338613963 Jun 25 16:28:51.609000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:28:51.609000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:28:51.609000 audit: BPF prog-id=138 op=LOAD Jun 25 16:28:51.609000 audit[2954]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2639 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:51.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233383935396138643362373733366337653933346137303338613963 Jun 25 16:28:51.662215 containerd[1279]: time="2024-06-25T16:28:51.662143666Z" level=info msg="StartContainer for \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\" returns successfully" Jun 25 16:28:51.675121 systemd[1]: run-containerd-runc-k8s.io-81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515-runc.IDG68h.mount: Deactivated successfully. 
Jun 25 16:28:52.203985 kubelet[2237]: E0625 16:28:52.203545 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:52.209205 kubelet[2237]: I0625 16:28:52.209141 2237 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e62e6527-6ada-40a9-9d88-f5a264ae7e74" path="/var/lib/kubelet/pods/e62e6527-6ada-40a9-9d88-f5a264ae7e74/volumes" Jun 25 16:28:52.362260 containerd[1279]: time="2024-06-25T16:28:52.362187022Z" level=info msg="StopContainer for \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\" with timeout 300 (s)" Jun 25 16:28:52.363054 containerd[1279]: time="2024-06-25T16:28:52.362988532Z" level=info msg="Stop container \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\" with signal terminated" Jun 25 16:28:52.365429 kubelet[2237]: E0625 16:28:52.365389 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:52.367444 containerd[1279]: time="2024-06-25T16:28:52.366170667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:28:52.396079 kubelet[2237]: I0625 16:28:52.396029 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-867d8f6cb7-ck5hc" podStartSLOduration=2.66741283 podCreationTimestamp="2024-06-25 16:28:45 +0000 UTC" firstStartedPulling="2024-06-25 16:28:46.705879748 +0000 UTC m=+22.747191371" lastFinishedPulling="2024-06-25 16:28:51.434432109 +0000 UTC m=+27.475743755" observedRunningTime="2024-06-25 16:28:52.390758245 +0000 UTC m=+28.432069886" watchObservedRunningTime="2024-06-25 16:28:52.395965214 +0000 UTC m=+28.437276895" Jun 25 16:28:52.405577 systemd[1]: cri-containerd-b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316.scope: Deactivated successfully. Jun 25 16:28:52.404000 audit: BPF prog-id=135 op=UNLOAD Jun 25 16:28:52.408000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:28:52.460038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316-rootfs.mount: Deactivated successfully. 
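The pod_startup_latency_tracker entry above reports podStartSLOduration=2.66741283 for calico-typha-867d8f6cb7-ck5hc alongside its creation, pull and running timestamps. That value is consistent with the creation-to-running interval minus the image-pull window; a quick check using the timestamps copied from that entry (only approximate, since podCreationTimestamp is logged at one-second resolution and the tracker's internal timestamps can differ slightly from the printed ones):

```python
# Sanity-check the logged podStartSLOduration against the other timestamps
# printed in the same entry: (running - created) - (pull end - pull start).
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created        = ts("2024-06-25 16:28:45.000000")   # podCreationTimestamp
first_pull     = ts("2024-06-25 16:28:46.705879")   # firstStartedPulling
last_pull_done = ts("2024-06-25 16:28:51.434432")   # lastFinishedPulling
running        = ts("2024-06-25 16:28:52.390758")   # observedRunningTime

slo = (running - created) - (last_pull_done - first_pull)
print(round(slo.total_seconds(), 3))   # ~2.662, close to the logged 2.66741283
```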
Jun 25 16:28:52.472718 containerd[1279]: time="2024-06-25T16:28:52.472642121Z" level=info msg="shim disconnected" id=b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316 namespace=k8s.io Jun 25 16:28:52.472718 containerd[1279]: time="2024-06-25T16:28:52.472709037Z" level=warning msg="cleaning up after shim disconnected" id=b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316 namespace=k8s.io Jun 25 16:28:52.472718 containerd[1279]: time="2024-06-25T16:28:52.472719014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:52.501234 containerd[1279]: time="2024-06-25T16:28:52.501162081Z" level=info msg="StopContainer for \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\" returns successfully" Jun 25 16:28:52.502614 containerd[1279]: time="2024-06-25T16:28:52.502566402Z" level=info msg="StopPodSandbox for \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\"" Jun 25 16:28:52.502926 containerd[1279]: time="2024-06-25T16:28:52.502890280Z" level=info msg="Container to stop \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:28:52.505892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28-shm.mount: Deactivated successfully. Jun 25 16:28:52.518000 audit: BPF prog-id=117 op=UNLOAD Jun 25 16:28:52.519235 systemd[1]: cri-containerd-9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28.scope: Deactivated successfully. Jun 25 16:28:52.522000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:28:52.553686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28-rootfs.mount: Deactivated successfully. 
Jun 25 16:28:52.561236 containerd[1279]: time="2024-06-25T16:28:52.561153545Z" level=info msg="shim disconnected" id=9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28 namespace=k8s.io Jun 25 16:28:52.561595 containerd[1279]: time="2024-06-25T16:28:52.561567119Z" level=warning msg="cleaning up after shim disconnected" id=9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28 namespace=k8s.io Jun 25 16:28:52.561703 containerd[1279]: time="2024-06-25T16:28:52.561689825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:52.592378 containerd[1279]: time="2024-06-25T16:28:52.592326614Z" level=info msg="TearDown network for sandbox \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" successfully" Jun 25 16:28:52.592576 containerd[1279]: time="2024-06-25T16:28:52.592557763Z" level=info msg="StopPodSandbox for \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" returns successfully" Jun 25 16:28:52.643276 kubelet[2237]: I0625 16:28:52.643222 2237 topology_manager.go:215] "Topology Admit Handler" podUID="dd8aeaed-4c4d-4e70-8036-3d30a0921efb" podNamespace="calico-system" podName="calico-typha-85f7d8fc68-6zl42" Jun 25 16:28:52.643708 kubelet[2237]: E0625 16:28:52.643680 2237 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fb2d6566-f559-4969-87b0-6e5e80a2a6a0" containerName="calico-typha" Jun 25 16:28:52.643857 kubelet[2237]: I0625 16:28:52.643841 2237 memory_manager.go:346] "RemoveStaleState removing state" podUID="fb2d6566-f559-4969-87b0-6e5e80a2a6a0" containerName="calico-typha" Jun 25 16:28:52.651880 systemd[1]: Created slice kubepods-besteffort-poddd8aeaed_4c4d_4e70_8036_3d30a0921efb.slice - libcontainer container kubepods-besteffort-poddd8aeaed_4c4d_4e70_8036_3d30a0921efb.slice. 
Jun 25 16:28:52.663000 audit[3071]: NETFILTER_CFG table=filter:97 family=2 entries=16 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:52.663000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd8411cb20 a2=0 a3=7ffd8411cb0c items=0 ppid=2413 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:52.664000 audit[3071]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:52.664000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8411cb20 a2=0 a3=0 items=0 ppid=2413 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:52.686196 kubelet[2237]: I0625 16:28:52.686148 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-typha-certs\") pod \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\" (UID: \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\") " Jun 25 16:28:52.686392 kubelet[2237]: I0625 16:28:52.686234 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp8jp\" (UniqueName: \"kubernetes.io/projected/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-kube-api-access-gp8jp\") pod \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\" (UID: \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\") " Jun 25 16:28:52.686392 kubelet[2237]: I0625 16:28:52.686296 2237 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-tigera-ca-bundle\") pod \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\" (UID: \"fb2d6566-f559-4969-87b0-6e5e80a2a6a0\") " Jun 25 16:28:52.696818 systemd[1]: var-lib-kubelet-pods-fb2d6566\x2df559\x2d4969\x2d87b0\x2d6e5e80a2a6a0-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 16:28:52.703878 systemd[1]: var-lib-kubelet-pods-fb2d6566\x2df559\x2d4969\x2d87b0\x2d6e5e80a2a6a0-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 16:28:52.708492 kubelet[2237]: I0625 16:28:52.708414 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "fb2d6566-f559-4969-87b0-6e5e80a2a6a0" (UID: "fb2d6566-f559-4969-87b0-6e5e80a2a6a0"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:28:52.710024 kubelet[2237]: I0625 16:28:52.709748 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "fb2d6566-f559-4969-87b0-6e5e80a2a6a0" (UID: "fb2d6566-f559-4969-87b0-6e5e80a2a6a0"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:28:52.717781 kubelet[2237]: I0625 16:28:52.713146 2237 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-kube-api-access-gp8jp" (OuterVolumeSpecName: "kube-api-access-gp8jp") pod "fb2d6566-f559-4969-87b0-6e5e80a2a6a0" (UID: "fb2d6566-f559-4969-87b0-6e5e80a2a6a0"). InnerVolumeSpecName "kube-api-access-gp8jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:28:52.714658 systemd[1]: var-lib-kubelet-pods-fb2d6566\x2df559\x2d4969\x2d87b0\x2d6e5e80a2a6a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgp8jp.mount: Deactivated successfully. Jun 25 16:28:52.729000 audit[3076]: NETFILTER_CFG table=filter:99 family=2 entries=16 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:52.729000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe84d0df00 a2=0 a3=7ffe84d0deec items=0 ppid=2413 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:52.730000 audit[3076]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:28:52.730000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe84d0df00 a2=0 a3=0 items=0 ppid=2413 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:52.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:28:52.786998 kubelet[2237]: I0625 16:28:52.786875 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7549\" (UniqueName: \"kubernetes.io/projected/dd8aeaed-4c4d-4e70-8036-3d30a0921efb-kube-api-access-m7549\") pod \"calico-typha-85f7d8fc68-6zl42\" (UID: \"dd8aeaed-4c4d-4e70-8036-3d30a0921efb\") " pod="calico-system/calico-typha-85f7d8fc68-6zl42" Jun 25 16:28:52.786998 kubelet[2237]: I0625 16:28:52.786955 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd8aeaed-4c4d-4e70-8036-3d30a0921efb-typha-certs\") pod \"calico-typha-85f7d8fc68-6zl42\" (UID: \"dd8aeaed-4c4d-4e70-8036-3d30a0921efb\") " pod="calico-system/calico-typha-85f7d8fc68-6zl42" Jun 25 16:28:52.786998 kubelet[2237]: I0625 16:28:52.786986 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dd8aeaed-4c4d-4e70-8036-3d30a0921efb-tigera-ca-bundle\") pod \"calico-typha-85f7d8fc68-6zl42\" (UID: \"dd8aeaed-4c4d-4e70-8036-3d30a0921efb\") " pod="calico-system/calico-typha-85f7d8fc68-6zl42" Jun 25 16:28:52.786998 kubelet[2237]: I0625 16:28:52.787017 2237 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gp8jp\" (UniqueName: \"kubernetes.io/projected/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-kube-api-access-gp8jp\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:52.787614 kubelet[2237]: I0625 16:28:52.787031 2237 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-typha-certs\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:52.787614 kubelet[2237]: I0625 16:28:52.787042 2237 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb2d6566-f559-4969-87b0-6e5e80a2a6a0-tigera-ca-bundle\") on node \"ci-3815.2.4-a-1561673ea7\" DevicePath \"\"" Jun 25 16:28:52.957760 kubelet[2237]: E0625 16:28:52.957708 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:52.958888 containerd[1279]: time="2024-06-25T16:28:52.958846667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f7d8fc68-6zl42,Uid:dd8aeaed-4c4d-4e70-8036-3d30a0921efb,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:52.994248 containerd[1279]: time="2024-06-25T16:28:52.994000218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:28:52.994248 containerd[1279]: time="2024-06-25T16:28:52.994173842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:52.994802 containerd[1279]: time="2024-06-25T16:28:52.994727169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:28:52.994919 containerd[1279]: time="2024-06-25T16:28:52.994842544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:28:53.025228 systemd[1]: Started cri-containerd-caba7899b22fdaf653640d465968ea5e30c9d7d8eaa7d6035850985c8df52a60.scope - libcontainer container caba7899b22fdaf653640d465968ea5e30c9d7d8eaa7d6035850985c8df52a60. 
Jun 25 16:28:53.049000 audit: BPF prog-id=139 op=LOAD Jun 25 16:28:53.051415 kernel: kauditd_printk_skb: 62 callbacks suppressed Jun 25 16:28:53.051520 kernel: audit: type=1334 audit(1719332933.049:537): prog-id=139 op=LOAD Jun 25 16:28:53.052000 audit: BPF prog-id=140 op=LOAD Jun 25 16:28:53.052000 audit[3097]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3087 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.057371 kernel: audit: type=1334 audit(1719332933.052:538): prog-id=140 op=LOAD Jun 25 16:28:53.057448 kernel: audit: type=1300 audit(1719332933.052:538): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3087 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361626137383939623232666461663635333634306434363539363865 Jun 25 16:28:53.052000 audit: BPF prog-id=141 op=LOAD Jun 25 16:28:53.066527 kernel: audit: type=1327 audit(1719332933.052:538): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361626137383939623232666461663635333634306434363539363865 Jun 25 16:28:53.066631 kernel: audit: type=1334 audit(1719332933.052:539): prog-id=141 op=LOAD Jun 25 16:28:53.052000 audit[3097]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3087 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.069331 kernel: audit: type=1300 audit(1719332933.052:539): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3087 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361626137383939623232666461663635333634306434363539363865 Jun 25 16:28:53.052000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:28:53.078615 kernel: audit: type=1327 audit(1719332933.052:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361626137383939623232666461663635333634306434363539363865 Jun 25 16:28:53.078732 kernel: audit: type=1334 audit(1719332933.052:540): prog-id=141 op=UNLOAD Jun 25 16:28:53.052000 audit: BPF prog-id=140 op=UNLOAD Jun 25 16:28:53.080356 kernel: audit: type=1334 audit(1719332933.052:541): prog-id=140 op=UNLOAD Jun 25 16:28:53.053000 audit: BPF prog-id=142 op=LOAD Jun 25 
16:28:53.082982 kernel: audit: type=1334 audit(1719332933.053:542): prog-id=142 op=LOAD Jun 25 16:28:53.053000 audit[3097]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3087 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361626137383939623232666461663635333634306434363539363865 Jun 25 16:28:53.142328 containerd[1279]: time="2024-06-25T16:28:53.142263470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f7d8fc68-6zl42,Uid:dd8aeaed-4c4d-4e70-8036-3d30a0921efb,Namespace:calico-system,Attempt:0,} returns sandbox id \"caba7899b22fdaf653640d465968ea5e30c9d7d8eaa7d6035850985c8df52a60\"" Jun 25 16:28:53.145558 kubelet[2237]: E0625 16:28:53.145509 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:53.161718 containerd[1279]: time="2024-06-25T16:28:53.161674315Z" level=info msg="CreateContainer within sandbox \"caba7899b22fdaf653640d465968ea5e30c9d7d8eaa7d6035850985c8df52a60\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:28:53.183423 containerd[1279]: time="2024-06-25T16:28:53.183277238Z" level=info msg="CreateContainer within sandbox \"caba7899b22fdaf653640d465968ea5e30c9d7d8eaa7d6035850985c8df52a60\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7d0de3038727d620aa6b6cb9d46e7a8a0ef82d78e899c755ba25ce35514331ca\"" Jun 25 16:28:53.184566 containerd[1279]: time="2024-06-25T16:28:53.184514562Z" level=info msg="StartContainer for \"7d0de3038727d620aa6b6cb9d46e7a8a0ef82d78e899c755ba25ce35514331ca\"" Jun 25 16:28:53.236300 systemd[1]: Started cri-containerd-7d0de3038727d620aa6b6cb9d46e7a8a0ef82d78e899c755ba25ce35514331ca.scope - libcontainer container 7d0de3038727d620aa6b6cb9d46e7a8a0ef82d78e899c755ba25ce35514331ca. 
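The SYSCALL audit records emitted while runc starts this container use raw numbers: arch=c000003e is AUDIT_ARCH_X86_64, syscall=321 is bpf(2) (runc loading the device-cgroup eBPF programs reported as "BPF prog-id=... op=LOAD", with the returned prog fds in exit=), and the earlier iptables-restore records used syscall=46, sendmsg(2) carrying the nftables netlink batch. A tiny lookup sketch (Go; the table covers only the numbers seen in this log):

    package main

    import "fmt"

    // Partial x86_64 syscall table for the numbers appearing in the
    // audit records above; illustrative, not exhaustive.
    var x8664Syscalls = map[int]string{
    	46:  "sendmsg", // nftables netlink batch from iptables-restore (nft backend)
    	321: "bpf",     // runc attaching device-cgroup eBPF programs
    }

    func main() {
    	for _, n := range []int{46, 321} {
    		fmt.Printf("syscall=%d -> %s(2)\n", n, x8664Syscalls[n])
    	}
    }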
Jun 25 16:28:53.255000 audit: BPF prog-id=143 op=LOAD Jun 25 16:28:53.257000 audit: BPF prog-id=144 op=LOAD Jun 25 16:28:53.257000 audit[3130]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3087 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764306465333033383732376436323061613662366362396434366537 Jun 25 16:28:53.257000 audit: BPF prog-id=145 op=LOAD Jun 25 16:28:53.257000 audit[3130]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3087 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764306465333033383732376436323061613662366362396434366537 Jun 25 16:28:53.257000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:28:53.257000 audit: BPF prog-id=144 op=UNLOAD Jun 25 16:28:53.257000 audit: BPF prog-id=146 op=LOAD Jun 25 16:28:53.257000 audit[3130]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3087 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:53.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764306465333033383732376436323061613662366362396434366537 Jun 25 16:28:53.306984 containerd[1279]: time="2024-06-25T16:28:53.306841655Z" level=info msg="StartContainer for \"7d0de3038727d620aa6b6cb9d46e7a8a0ef82d78e899c755ba25ce35514331ca\" returns successfully" Jun 25 16:28:53.370318 kubelet[2237]: I0625 16:28:53.368790 2237 scope.go:117] "RemoveContainer" containerID="b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316" Jun 25 16:28:53.377650 containerd[1279]: time="2024-06-25T16:28:53.377189733Z" level=info msg="RemoveContainer for \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\"" Jun 25 16:28:53.379806 kubelet[2237]: E0625 16:28:53.379753 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:53.380869 systemd[1]: Removed slice kubepods-besteffort-podfb2d6566_f559_4969_87b0_6e5e80a2a6a0.slice - libcontainer container kubepods-besteffort-podfb2d6566_f559_4969_87b0_6e5e80a2a6a0.slice. 
Jun 25 16:28:53.389109 containerd[1279]: time="2024-06-25T16:28:53.387768828Z" level=info msg="RemoveContainer for \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\" returns successfully" Jun 25 16:28:53.393376 kubelet[2237]: I0625 16:28:53.393339 2237 scope.go:117] "RemoveContainer" containerID="b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316" Jun 25 16:28:53.394328 containerd[1279]: time="2024-06-25T16:28:53.394146190Z" level=error msg="ContainerStatus for \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\": not found" Jun 25 16:28:53.394632 kubelet[2237]: E0625 16:28:53.394594 2237 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\": not found" containerID="b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316" Jun 25 16:28:53.394717 kubelet[2237]: I0625 16:28:53.394685 2237 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316"} err="failed to get container status \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\": rpc error: code = NotFound desc = an error occurred when try to find container \"b38959a8d3b7736c7e934a7038a9cace86e65ae3fc47bca03ee82d88fb21c316\": not found" Jun 25 16:28:53.411636 kubelet[2237]: I0625 16:28:53.411571 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-85f7d8fc68-6zl42" podStartSLOduration=6.41148609 podCreationTimestamp="2024-06-25 16:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:28:53.397998467 +0000 UTC m=+29.439310119" watchObservedRunningTime="2024-06-25 16:28:53.41148609 +0000 UTC m=+29.452797735" Jun 25 16:28:54.206990 kubelet[2237]: E0625 16:28:54.205175 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:54.214527 kubelet[2237]: I0625 16:28:54.214490 2237 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fb2d6566-f559-4969-87b0-6e5e80a2a6a0" path="/var/lib/kubelet/pods/fb2d6566-f559-4969-87b0-6e5e80a2a6a0/volumes" Jun 25 16:28:56.202695 kubelet[2237]: E0625 16:28:56.202625 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:57.333122 containerd[1279]: time="2024-06-25T16:28:57.333052909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:57.334344 containerd[1279]: time="2024-06-25T16:28:57.334275077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes 
read=93087850" Jun 25 16:28:57.335766 containerd[1279]: time="2024-06-25T16:28:57.335715310Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:57.338227 containerd[1279]: time="2024-06-25T16:28:57.338172274Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:57.340882 containerd[1279]: time="2024-06-25T16:28:57.340821680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:28:57.342148 containerd[1279]: time="2024-06-25T16:28:57.342089089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.975865813s" Jun 25 16:28:57.342363 containerd[1279]: time="2024-06-25T16:28:57.342336352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:28:57.348807 containerd[1279]: time="2024-06-25T16:28:57.348745690Z" level=info msg="CreateContainer within sandbox \"81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:28:57.368886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2690482349.mount: Deactivated successfully. Jun 25 16:28:57.378690 containerd[1279]: time="2024-06-25T16:28:57.378587759Z" level=info msg="CreateContainer within sandbox \"81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce\"" Jun 25 16:28:57.380033 containerd[1279]: time="2024-06-25T16:28:57.379981487Z" level=info msg="StartContainer for \"389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce\"" Jun 25 16:28:57.503238 systemd[1]: Started cri-containerd-389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce.scope - libcontainer container 389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce. 
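The pull of ghcr.io/flatcar/calico/cni:v3.28.0 above reports 93,087,850 bytes read over 4.975865813s, i.e. roughly 17.8 MiB/s from the registry. A throwaway check of that arithmetic (Go):

    package main

    import "fmt"

    func main() {
    	bytesRead := 93087850.0 // "bytes read" from the containerd event above
    	seconds := 4.975865813  // duration from the "Pulled image" message above
    	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ~17.8 MiB/s
    }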
Jun 25 16:28:57.528000 audit: BPF prog-id=147 op=LOAD Jun 25 16:28:57.528000 audit[3173]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2861 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:57.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338396264303662613331313763353365343234626264373336313230 Jun 25 16:28:57.528000 audit: BPF prog-id=148 op=LOAD Jun 25 16:28:57.528000 audit[3173]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2861 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:57.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338396264303662613331313763353365343234626264373336313230 Jun 25 16:28:57.528000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:28:57.528000 audit: BPF prog-id=147 op=UNLOAD Jun 25 16:28:57.528000 audit: BPF prog-id=149 op=LOAD Jun 25 16:28:57.528000 audit[3173]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2861 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:28:57.528000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338396264303662613331313763353365343234626264373336313230 Jun 25 16:28:57.553402 containerd[1279]: time="2024-06-25T16:28:57.553317149Z" level=info msg="StartContainer for \"389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce\" returns successfully" Jun 25 16:28:58.069224 systemd[1]: cri-containerd-389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce.scope: Deactivated successfully. Jun 25 16:28:58.074284 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:28:58.074498 kernel: audit: type=1334 audit(1719332938.072:554): prog-id=149 op=UNLOAD Jun 25 16:28:58.072000 audit: BPF prog-id=149 op=UNLOAD Jun 25 16:28:58.111596 kubelet[2237]: I0625 16:28:58.111531 2237 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:28:58.124691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce-rootfs.mount: Deactivated successfully. 
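The install-cni container above runs to completion (its scope is deactivated right after it exits), and kubelet then logs "Fast updating node status as it just became ready": once Calico's install step drops a CNI config into place, the container runtime can report NetworkReady and the node turns Ready. A rough stand-in for that readiness condition (Go; illustrative — containerd's CRI plugin reads CNI config from /etc/cni/net.d by default, the exact check is assumed):

    package main

    import (
    	"fmt"
    	"path/filepath"
    )

    // hasCNIConfig approximates the NetworkReady precondition: the
    // network is only considered ready once a CNI config file exists
    // in the CNI config directory, populated here by install-cni.
    func hasCNIConfig(dir string) bool {
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println(hasCNIConfig("/etc/cni/net.d"))
    }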
Jun 25 16:28:58.132152 containerd[1279]: time="2024-06-25T16:28:58.132078496Z" level=info msg="shim disconnected" id=389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce namespace=k8s.io Jun 25 16:28:58.132446 containerd[1279]: time="2024-06-25T16:28:58.132417798Z" level=warning msg="cleaning up after shim disconnected" id=389bd06ba3117c53e424bbd736120cc842a1adb891ca987c271f8285af5fe1ce namespace=k8s.io Jun 25 16:28:58.132525 containerd[1279]: time="2024-06-25T16:28:58.132506290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:28:58.170074 kubelet[2237]: I0625 16:28:58.170039 2237 topology_manager.go:215] "Topology Admit Handler" podUID="7f99e445-217f-48d6-b1a5-92f613923722" podNamespace="kube-system" podName="coredns-5dd5756b68-jnbj2" Jun 25 16:28:58.179538 systemd[1]: Created slice kubepods-burstable-pod7f99e445_217f_48d6_b1a5_92f613923722.slice - libcontainer container kubepods-burstable-pod7f99e445_217f_48d6_b1a5_92f613923722.slice. Jun 25 16:28:58.184733 kubelet[2237]: I0625 16:28:58.184687 2237 topology_manager.go:215] "Topology Admit Handler" podUID="c683e090-e0a1-4042-b2a1-c22a1edf7207" podNamespace="kube-system" podName="coredns-5dd5756b68-lhffn" Jun 25 16:28:58.190502 containerd[1279]: time="2024-06-25T16:28:58.190435809Z" level=warning msg="cleanup warnings time=\"2024-06-25T16:28:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 16:28:58.197227 systemd[1]: Created slice kubepods-burstable-podc683e090_e0a1_4042_b2a1_c22a1edf7207.slice - libcontainer container kubepods-burstable-podc683e090_e0a1_4042_b2a1_c22a1edf7207.slice. Jun 25 16:28:58.208427 kubelet[2237]: I0625 16:28:58.208399 2237 topology_manager.go:215] "Topology Admit Handler" podUID="ea16791a-e4ba-4de5-bc62-0b62890d911f" podNamespace="calico-system" podName="calico-kube-controllers-66c968684-ncbrr" Jun 25 16:28:58.215832 systemd[1]: Created slice kubepods-besteffort-pod1a3072e8_f1d4_4a0b_a333_9167360f3eb4.slice - libcontainer container kubepods-besteffort-pod1a3072e8_f1d4_4a0b_a333_9167360f3eb4.slice. Jun 25 16:28:58.219986 containerd[1279]: time="2024-06-25T16:28:58.219481002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q78rw,Uid:1a3072e8-f1d4-4a0b-a333-9167360f3eb4,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:58.222125 systemd[1]: Created slice kubepods-besteffort-podea16791a_e4ba_4de5_bc62_0b62890d911f.slice - libcontainer container kubepods-besteffort-podea16791a_e4ba_4de5_bc62_0b62890d911f.slice. 
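The newly admitted pods above land in different slice prefixes: the coredns pods go into kubepods-burstable-pod... slices while csi-node-driver and calico-kube-controllers go into kubepods-besteffort-pod... slices. The prefix is the pod QoS class kubelet derives from container requests and limits. A condensed sketch of that classification (Go; simplified, not kubelet's implementation):

    package main

    import "fmt"

    // container holds only the fields the QoS decision needs.
    type container struct {
    	requests map[string]string // e.g. {"cpu": "100m", "memory": "70Mi"}
    	limits   map[string]string
    }

    // qosClass condenses the Kubernetes QoS rules behind the
    // kubepods-<qos>-pod<uid>.slice names: BestEffort when no container
    // sets requests or limits, Guaranteed when every container's cpu and
    // memory limits are set and equal to its requests, Burstable otherwise.
    func qosClass(containers []container) string {
    	anySet, guaranteed := false, true
    	for _, c := range containers {
    		if len(c.requests) > 0 || len(c.limits) > 0 {
    			anySet = true
    		}
    		for _, res := range []string{"cpu", "memory"} {
    			if c.limits[res] == "" || c.limits[res] != c.requests[res] {
    				guaranteed = false
    			}
    		}
    	}
    	switch {
    	case !anySet:
    		return "besteffort"
    	case guaranteed:
    		return "guaranteed"
    	default:
    		return "burstable"
    	}
    }

    func main() {
    	withRequests := []container{{requests: map[string]string{"cpu": "100m", "memory": "70Mi"}}}
    	fmt.Println(qosClass(withRequests))    // burstable
    	fmt.Println(qosClass([]container{{}})) // besteffort
    }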
Jun 25 16:28:58.242790 kubelet[2237]: I0625 16:28:58.242737 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqg4b\" (UniqueName: \"kubernetes.io/projected/7f99e445-217f-48d6-b1a5-92f613923722-kube-api-access-qqg4b\") pod \"coredns-5dd5756b68-jnbj2\" (UID: \"7f99e445-217f-48d6-b1a5-92f613923722\") " pod="kube-system/coredns-5dd5756b68-jnbj2" Jun 25 16:28:58.243416 kubelet[2237]: I0625 16:28:58.243387 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f99e445-217f-48d6-b1a5-92f613923722-config-volume\") pod \"coredns-5dd5756b68-jnbj2\" (UID: \"7f99e445-217f-48d6-b1a5-92f613923722\") " pod="kube-system/coredns-5dd5756b68-jnbj2" Jun 25 16:28:58.345754 kubelet[2237]: I0625 16:28:58.345530 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c683e090-e0a1-4042-b2a1-c22a1edf7207-config-volume\") pod \"coredns-5dd5756b68-lhffn\" (UID: \"c683e090-e0a1-4042-b2a1-c22a1edf7207\") " pod="kube-system/coredns-5dd5756b68-lhffn" Jun 25 16:28:58.348721 kubelet[2237]: I0625 16:28:58.348673 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ffx6\" (UniqueName: \"kubernetes.io/projected/ea16791a-e4ba-4de5-bc62-0b62890d911f-kube-api-access-4ffx6\") pod \"calico-kube-controllers-66c968684-ncbrr\" (UID: \"ea16791a-e4ba-4de5-bc62-0b62890d911f\") " pod="calico-system/calico-kube-controllers-66c968684-ncbrr" Jun 25 16:28:58.348916 kubelet[2237]: I0625 16:28:58.348780 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvm2\" (UniqueName: \"kubernetes.io/projected/c683e090-e0a1-4042-b2a1-c22a1edf7207-kube-api-access-snvm2\") pod \"coredns-5dd5756b68-lhffn\" (UID: \"c683e090-e0a1-4042-b2a1-c22a1edf7207\") " pod="kube-system/coredns-5dd5756b68-lhffn" Jun 25 16:28:58.348916 kubelet[2237]: I0625 16:28:58.348809 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea16791a-e4ba-4de5-bc62-0b62890d911f-tigera-ca-bundle\") pod \"calico-kube-controllers-66c968684-ncbrr\" (UID: \"ea16791a-e4ba-4de5-bc62-0b62890d911f\") " pod="calico-system/calico-kube-controllers-66c968684-ncbrr" Jun 25 16:28:58.409249 containerd[1279]: time="2024-06-25T16:28:58.409164636Z" level=error msg="Failed to destroy network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.413395 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a-shm.mount: Deactivated successfully. 
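The sandbox failure above ("plugin type=\"calico\" failed ... stat /var/lib/calico/nodename: no such file or directory") is the Calico CNI plugin refusing to set up or tear down pod networking before it can read the node name that the calico/node container writes to /var/lib/calico/nodename; at this point calico-node has not started yet. A minimal sketch of that precondition (Go; illustrative, not Calico's source, and the exact error text will differ):

    package main

    import (
    	"fmt"
    	"os"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // requireNodename mirrors the failure mode in the log: the CNI
    // plugin cannot proceed until calico/node has written this file.
    func requireNodename() (string, error) {
    	b, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	return string(b), nil
    }

    func main() {
    	if name, err := requireNodename(); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("node:", name)
    	}
    }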
Jun 25 16:28:58.414309 containerd[1279]: time="2024-06-25T16:28:58.414184175Z" level=error msg="encountered an error cleaning up failed sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.414599 containerd[1279]: time="2024-06-25T16:28:58.414532803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q78rw,Uid:1a3072e8-f1d4-4a0b-a333-9167360f3eb4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.415220 kubelet[2237]: E0625 16:28:58.415188 2237 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.415360 kubelet[2237]: E0625 16:28:58.415296 2237 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q78rw" Jun 25 16:28:58.415718 kubelet[2237]: E0625 16:28:58.415694 2237 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q78rw" Jun 25 16:28:58.415813 kubelet[2237]: E0625 16:28:58.415798 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q78rw_calico-system(1a3072e8-f1d4-4a0b-a333-9167360f3eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q78rw_calico-system(1a3072e8-f1d4-4a0b-a333-9167360f3eb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:58.425002 kubelet[2237]: E0625 16:28:58.423688 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:58.432675 containerd[1279]: time="2024-06-25T16:28:58.432607951Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:28:58.442716 kubelet[2237]: I0625 16:28:58.442680 2237 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:28:58.443709 containerd[1279]: time="2024-06-25T16:28:58.443660364Z" level=info msg="StopPodSandbox for \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\"" Jun 25 16:28:58.464965 containerd[1279]: time="2024-06-25T16:28:58.462063176Z" level=info msg="Ensure that sandbox a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a in task-service has been cleanup successfully" Jun 25 16:28:58.503408 kubelet[2237]: E0625 16:28:58.503354 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:58.523301 containerd[1279]: time="2024-06-25T16:28:58.523237461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jnbj2,Uid:7f99e445-217f-48d6-b1a5-92f613923722,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:58.530626 containerd[1279]: time="2024-06-25T16:28:58.530545110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c968684-ncbrr,Uid:ea16791a-e4ba-4de5-bc62-0b62890d911f,Namespace:calico-system,Attempt:0,}" Jun 25 16:28:58.563762 containerd[1279]: time="2024-06-25T16:28:58.563646891Z" level=error msg="StopPodSandbox for \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\" failed" error="failed to destroy network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.564442 kubelet[2237]: E0625 16:28:58.564406 2237 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:28:58.564596 kubelet[2237]: E0625 16:28:58.564483 2237 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a"} Jun 25 16:28:58.564596 kubelet[2237]: E0625 16:28:58.564565 2237 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a3072e8-f1d4-4a0b-a333-9167360f3eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:58.564756 kubelet[2237]: E0625 16:28:58.564616 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a3072e8-f1d4-4a0b-a333-9167360f3eb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q78rw" podUID="1a3072e8-f1d4-4a0b-a333-9167360f3eb4" Jun 25 16:28:58.664787 containerd[1279]: time="2024-06-25T16:28:58.664580073Z" level=error msg="Failed to destroy network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.666209 containerd[1279]: time="2024-06-25T16:28:58.666140992Z" level=error msg="encountered an error cleaning up failed sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.666398 containerd[1279]: time="2024-06-25T16:28:58.666235030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jnbj2,Uid:7f99e445-217f-48d6-b1a5-92f613923722,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.666563 kubelet[2237]: E0625 16:28:58.666529 2237 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.666711 kubelet[2237]: E0625 16:28:58.666613 2237 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jnbj2" Jun 25 16:28:58.666711 kubelet[2237]: E0625 16:28:58.666645 2237 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jnbj2" Jun 25 16:28:58.666711 kubelet[2237]: E0625 16:28:58.666709 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-jnbj2_kube-system(7f99e445-217f-48d6-b1a5-92f613923722)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-jnbj2_kube-system(7f99e445-217f-48d6-b1a5-92f613923722)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jnbj2" podUID="7f99e445-217f-48d6-b1a5-92f613923722" Jun 25 16:28:58.674906 containerd[1279]: time="2024-06-25T16:28:58.674823868Z" level=error msg="Failed to destroy network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.684463 containerd[1279]: time="2024-06-25T16:28:58.684395985Z" level=error msg="encountered an error cleaning up failed sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.684748 containerd[1279]: time="2024-06-25T16:28:58.684708019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c968684-ncbrr,Uid:ea16791a-e4ba-4de5-bc62-0b62890d911f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.685222 kubelet[2237]: E0625 16:28:58.685181 2237 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.685318 kubelet[2237]: E0625 16:28:58.685277 2237 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66c968684-ncbrr" Jun 25 16:28:58.685318 kubelet[2237]: E0625 16:28:58.685316 2237 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66c968684-ncbrr" Jun 25 16:28:58.685426 kubelet[2237]: E0625 16:28:58.685410 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66c968684-ncbrr_calico-system(ea16791a-e4ba-4de5-bc62-0b62890d911f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-66c968684-ncbrr_calico-system(ea16791a-e4ba-4de5-bc62-0b62890d911f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66c968684-ncbrr" podUID="ea16791a-e4ba-4de5-bc62-0b62890d911f" Jun 25 16:28:58.802106 kubelet[2237]: E0625 16:28:58.801715 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:28:58.805222 containerd[1279]: time="2024-06-25T16:28:58.805146146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lhffn,Uid:c683e090-e0a1-4042-b2a1-c22a1edf7207,Namespace:kube-system,Attempt:0,}" Jun 25 16:28:58.923031 containerd[1279]: time="2024-06-25T16:28:58.922126206Z" level=error msg="Failed to destroy network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.923639 containerd[1279]: time="2024-06-25T16:28:58.923572654Z" level=error msg="encountered an error cleaning up failed sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.923810 containerd[1279]: time="2024-06-25T16:28:58.923783794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lhffn,Uid:c683e090-e0a1-4042-b2a1-c22a1edf7207,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.924330 kubelet[2237]: E0625 16:28:58.924258 2237 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:58.924464 kubelet[2237]: E0625 16:28:58.924371 2237 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lhffn" Jun 25 16:28:58.924464 kubelet[2237]: E0625 16:28:58.924396 2237 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-lhffn" Jun 25 16:28:58.926185 kubelet[2237]: E0625 16:28:58.926111 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-lhffn_kube-system(c683e090-e0a1-4042-b2a1-c22a1edf7207)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-lhffn_kube-system(c683e090-e0a1-4042-b2a1-c22a1edf7207)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lhffn" podUID="c683e090-e0a1-4042-b2a1-c22a1edf7207" Jun 25 16:28:59.366564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b-shm.mount: Deactivated successfully. Jun 25 16:28:59.445721 kubelet[2237]: I0625 16:28:59.445692 2237 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:28:59.447220 containerd[1279]: time="2024-06-25T16:28:59.447105665Z" level=info msg="StopPodSandbox for \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\"" Jun 25 16:28:59.448385 containerd[1279]: time="2024-06-25T16:28:59.448335737Z" level=info msg="Ensure that sandbox 879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b in task-service has been cleanup successfully" Jun 25 16:28:59.450981 kubelet[2237]: I0625 16:28:59.450554 2237 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:28:59.451218 containerd[1279]: time="2024-06-25T16:28:59.451171238Z" level=info msg="StopPodSandbox for \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\"" Jun 25 16:28:59.451501 containerd[1279]: time="2024-06-25T16:28:59.451457301Z" level=info msg="Ensure that sandbox aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5 in task-service has been cleanup successfully" Jun 25 16:28:59.456090 kubelet[2237]: I0625 16:28:59.455579 2237 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:28:59.456258 containerd[1279]: time="2024-06-25T16:28:59.456213542Z" level=info msg="StopPodSandbox for \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\"" Jun 25 16:28:59.456646 containerd[1279]: time="2024-06-25T16:28:59.456428514Z" level=info msg="Ensure that sandbox 3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c in task-service has been cleanup successfully" Jun 25 16:28:59.519876 containerd[1279]: time="2024-06-25T16:28:59.519784135Z" level=error msg="StopPodSandbox for \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\" failed" error="failed to destroy network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:59.520423 kubelet[2237]: E0625 16:28:59.520164 2237 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:28:59.520756 kubelet[2237]: E0625 16:28:59.520577 2237 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c"} Jun 25 16:28:59.520756 kubelet[2237]: E0625 16:28:59.520646 2237 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c683e090-e0a1-4042-b2a1-c22a1edf7207\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:59.520756 kubelet[2237]: E0625 16:28:59.520692 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c683e090-e0a1-4042-b2a1-c22a1edf7207\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-lhffn" podUID="c683e090-e0a1-4042-b2a1-c22a1edf7207" Jun 25 16:28:59.522556 containerd[1279]: time="2024-06-25T16:28:59.522419899Z" level=error msg="StopPodSandbox for \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\" failed" error="failed to destroy network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:59.523326 kubelet[2237]: E0625 16:28:59.523101 2237 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:28:59.523326 kubelet[2237]: E0625 16:28:59.523153 2237 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b"} Jun 25 16:28:59.523326 kubelet[2237]: E0625 16:28:59.523245 2237 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"7f99e445-217f-48d6-b1a5-92f613923722\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:59.523326 kubelet[2237]: E0625 16:28:59.523293 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f99e445-217f-48d6-b1a5-92f613923722\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jnbj2" podUID="7f99e445-217f-48d6-b1a5-92f613923722" Jun 25 16:28:59.530711 containerd[1279]: time="2024-06-25T16:28:59.530614634Z" level=error msg="StopPodSandbox for \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\" failed" error="failed to destroy network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:28:59.531679 kubelet[2237]: E0625 16:28:59.531505 2237 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:28:59.531679 kubelet[2237]: E0625 16:28:59.531558 2237 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5"} Jun 25 16:28:59.531679 kubelet[2237]: E0625 16:28:59.531603 2237 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea16791a-e4ba-4de5-bc62-0b62890d911f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:28:59.531679 kubelet[2237]: E0625 16:28:59.531636 2237 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea16791a-e4ba-4de5-bc62-0b62890d911f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66c968684-ncbrr" podUID="ea16791a-e4ba-4de5-bc62-0b62890d911f" Jun 25 16:29:05.710755 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1202612322.mount: Deactivated successfully. Jun 25 16:29:05.782911 containerd[1279]: time="2024-06-25T16:29:05.775917402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:29:05.783583 containerd[1279]: time="2024-06-25T16:29:05.772443473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:05.785296 containerd[1279]: time="2024-06-25T16:29:05.785251177Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:05.786370 containerd[1279]: time="2024-06-25T16:29:05.786293468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.353377759s" Jun 25 16:29:05.786370 containerd[1279]: time="2024-06-25T16:29:05.786350111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:29:05.786721 containerd[1279]: time="2024-06-25T16:29:05.786690114Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:05.787683 containerd[1279]: time="2024-06-25T16:29:05.787584180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:05.813583 containerd[1279]: time="2024-06-25T16:29:05.813411947Z" level=info msg="CreateContainer within sandbox \"81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:29:05.835908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399815036.mount: Deactivated successfully. Jun 25 16:29:05.857527 containerd[1279]: time="2024-06-25T16:29:05.857421992Z" level=info msg="CreateContainer within sandbox \"81ce797d73fea6e161c8d137504eeee0c9e87e1819c6d9eb5a21fa3400e8e515\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"08e3eddd8d23ea34dd6dd13f5db4c4e6ced91053e1ef02e2e015e420f6893300\"" Jun 25 16:29:05.858216 containerd[1279]: time="2024-06-25T16:29:05.858173249Z" level=info msg="StartContainer for \"08e3eddd8d23ea34dd6dd13f5db4c4e6ced91053e1ef02e2e015e420f6893300\"" Jun 25 16:29:05.907270 systemd[1]: Started cri-containerd-08e3eddd8d23ea34dd6dd13f5db4c4e6ced91053e1ef02e2e015e420f6893300.scope - libcontainer container 08e3eddd8d23ea34dd6dd13f5db4c4e6ced91053e1ef02e2e015e420f6893300. 
Jun 25 16:29:05.943996 kernel: audit: type=1334 audit(1719332945.936:555): prog-id=150 op=LOAD Jun 25 16:29:05.944197 kernel: audit: type=1300 audit(1719332945.936:555): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2861 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.936000 audit: BPF prog-id=150 op=LOAD Jun 25 16:29:05.936000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2861 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038653365646464386432336561333464643664643133663564623463 Jun 25 16:29:05.950018 kernel: audit: type=1327 audit(1719332945.936:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038653365646464386432336561333464643664643133663564623463 Jun 25 16:29:05.958341 kernel: audit: type=1334 audit(1719332945.936:556): prog-id=151 op=LOAD Jun 25 16:29:05.936000 audit: BPF prog-id=151 op=LOAD Jun 25 16:29:05.936000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2861 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.965057 kernel: audit: type=1300 audit(1719332945.936:556): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2861 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038653365646464386432336561333464643664643133663564623463 Jun 25 16:29:05.971025 kernel: audit: type=1327 audit(1719332945.936:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038653365646464386432336561333464643664643133663564623463 Jun 25 16:29:05.936000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:29:05.974145 kernel: audit: type=1334 audit(1719332945.936:557): prog-id=151 op=UNLOAD Jun 25 16:29:05.937000 audit: BPF prog-id=150 op=UNLOAD Jun 25 16:29:05.985003 kernel: audit: type=1334 audit(1719332945.937:558): prog-id=150 op=UNLOAD Jun 25 16:29:05.988281 kernel: audit: type=1334 audit(1719332945.937:559): prog-id=152 op=LOAD Jun 25 16:29:05.937000 audit: BPF prog-id=152 op=LOAD Jun 25 16:29:05.989316 containerd[1279]: time="2024-06-25T16:29:05.989232990Z" level=info msg="StartContainer for 
\"08e3eddd8d23ea34dd6dd13f5db4c4e6ced91053e1ef02e2e015e420f6893300\" returns successfully" Jun 25 16:29:05.997248 kernel: audit: type=1300 audit(1719332945.937:559): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2861 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.937000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2861 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:05.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038653365646464386432336561333464643664643133663564623463 Jun 25 16:29:06.127667 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:29:06.127859 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:29:06.474545 kubelet[2237]: E0625 16:29:06.474507 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:07.476129 kubelet[2237]: I0625 16:29:07.476093 2237 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:29:07.477438 kubelet[2237]: E0625 16:29:07.477411 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:07.842000 audit[3556]: AVC avc: denied { write } for pid=3556 comm="tee" name="fd" dev="proc" ino=25790 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:07.842000 audit[3556]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0451da05 a2=241 a3=1b6 items=1 ppid=3514 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:07.842000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:29:07.842000 audit: PATH item=0 name="/dev/fd/63" inode=25772 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:07.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:07.847000 audit[3564]: AVC avc: denied { write } for pid=3564 comm="tee" name="fd" dev="proc" ino=25402 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:07.849000 audit[3569]: AVC avc: denied { write } for pid=3569 comm="tee" name="fd" dev="proc" ino=25794 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:07.847000 audit[3564]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffecb8d4a06 a2=241 a3=1b6 items=1 ppid=3517 
pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:07.847000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:29:07.847000 audit: PATH item=0 name="/dev/fd/63" inode=25398 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:07.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:07.857000 audit[3566]: AVC avc: denied { write } for pid=3566 comm="tee" name="fd" dev="proc" ino=25799 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:07.857000 audit[3566]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbdc95a07 a2=241 a3=1b6 items=1 ppid=3511 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:07.857000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:29:07.857000 audit: PATH item=0 name="/dev/fd/63" inode=25399 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:07.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:07.864000 audit[3576]: AVC avc: denied { write } for pid=3576 comm="tee" name="fd" dev="proc" ino=25803 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:07.864000 audit[3576]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc88f40a05 a2=241 a3=1b6 items=1 ppid=3525 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:07.864000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:29:07.864000 audit: PATH item=0 name="/dev/fd/63" inode=25796 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:07.864000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:07.849000 audit[3569]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc26cf3a05 a2=241 a3=1b6 items=1 ppid=3522 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:07.849000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:29:07.849000 audit: PATH item=0 name="/dev/fd/63" inode=25783 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:07.849000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:07.876000 audit[3579]: AVC avc: denied { write } for pid=3579 comm="tee" name="fd" dev="proc" ino=25409 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:07.876000 audit[3579]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc51fed9f6 a2=241 a3=1b6 items=1 ppid=3520 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:07.876000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:29:07.876000 audit: PATH item=0 name="/dev/fd/63" inode=25406 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:07.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:07.880000 audit[3584]: AVC avc: denied { write } for pid=3584 comm="tee" name="fd" dev="proc" ino=25809 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:29:07.880000 audit[3584]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc9239c9f5 a2=241 a3=1b6 items=1 ppid=3536 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:07.880000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:29:07.880000 audit: PATH item=0 name="/dev/fd/63" inode=25411 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:29:07.880000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:29:10.205091 containerd[1279]: time="2024-06-25T16:29:10.204466822Z" level=info msg="StopPodSandbox for \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\"" Jun 25 16:29:10.308473 kubelet[2237]: I0625 16:29:10.307959 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-v88d5" podStartSLOduration=6.885536388 podCreationTimestamp="2024-06-25 16:28:50 +0000 UTC" firstStartedPulling="2024-06-25 16:28:52.36589044 +0000 UTC m=+28.407202056" lastFinishedPulling="2024-06-25 16:29:05.788223136 +0000 UTC m=+41.829534756" observedRunningTime="2024-06-25 16:29:06.497544785 +0000 UTC m=+42.538856426" watchObservedRunningTime="2024-06-25 16:29:10.307869088 +0000 UTC m=+46.349180734" Jun 25 16:29:10.374632 systemd[1]: Started sshd@7-164.92.91.188:22-139.178.89.65:45826.service - OpenSSH per-connection server daemon (139.178.89.65:45826). Jun 25 16:29:10.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.92.91.188:22-139.178.89.65:45826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:10.492000 audit[3663]: USER_ACCT pid=3663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:10.494000 audit[3663]: CRED_ACQ pid=3663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:10.494000 audit[3663]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffebababc80 a2=3 a3=7f3d26e97480 items=0 ppid=1 pid=3663 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:10.494000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:10.496504 sshd[3663]: Accepted publickey for core from 139.178.89.65 port 45826 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:10.497295 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:10.507039 systemd-logind[1272]: New session 8 of user core. Jun 25 16:29:10.512411 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 16:29:10.519000 audit[3663]: USER_START pid=3663 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:10.523000 audit[3667]: CRED_ACQ pid=3667 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.306 [INFO][3650] k8s.go 608: Cleaning up netns ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.306 [INFO][3650] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" iface="eth0" netns="/var/run/netns/cni-d4bfde39-af65-cd8d-c3d0-41bcb5b9b7c1" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.307 [INFO][3650] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" iface="eth0" netns="/var/run/netns/cni-d4bfde39-af65-cd8d-c3d0-41bcb5b9b7c1" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.307 [INFO][3650] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" iface="eth0" netns="/var/run/netns/cni-d4bfde39-af65-cd8d-c3d0-41bcb5b9b7c1" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.307 [INFO][3650] k8s.go 615: Releasing IP address(es) ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.307 [INFO][3650] utils.go 188: Calico CNI releasing IP address ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.480 [INFO][3658] ipam_plugin.go 411: Releasing address using handleID ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.481 [INFO][3658] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.482 [INFO][3658] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.504 [WARNING][3658] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.505 [INFO][3658] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.519 [INFO][3658] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:10.527796 containerd[1279]: 2024-06-25 16:29:10.524 [INFO][3650] k8s.go 621: Teardown processing complete. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:10.532481 systemd[1]: run-netns-cni\x2dd4bfde39\x2daf65\x2dcd8d\x2dc3d0\x2d41bcb5b9b7c1.mount: Deactivated successfully. 
Jun 25 16:29:10.532957 containerd[1279]: time="2024-06-25T16:29:10.532875670Z" level=info msg="TearDown network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\" successfully" Jun 25 16:29:10.533148 containerd[1279]: time="2024-06-25T16:29:10.533121982Z" level=info msg="StopPodSandbox for \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\" returns successfully" Jun 25 16:29:10.535569 containerd[1279]: time="2024-06-25T16:29:10.535516249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q78rw,Uid:1a3072e8-f1d4-4a0b-a333-9167360f3eb4,Namespace:calico-system,Attempt:1,}" Jun 25 16:29:10.829434 sshd[3663]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:10.832000 audit[3663]: USER_END pid=3663 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:10.832000 audit[3663]: CRED_DISP pid=3663 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:10.836156 systemd[1]: sshd@7-164.92.91.188:22-139.178.89.65:45826.service: Deactivated successfully. Jun 25 16:29:10.837179 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:29:10.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-164.92.91.188:22-139.178.89.65:45826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:10.838412 systemd-logind[1272]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:29:10.840645 systemd-logind[1272]: Removed session 8. 
Jun 25 16:29:10.911688 systemd-networkd[1093]: cali09c0dc3e679: Link UP Jun 25 16:29:10.915527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:29:10.915702 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali09c0dc3e679: link becomes ready Jun 25 16:29:10.917786 systemd-networkd[1093]: cali09c0dc3e679: Gained carrier Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.682 [INFO][3672] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.737 [INFO][3672] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0 csi-node-driver- calico-system 1a3072e8-f1d4-4a0b-a333-9167360f3eb4 855 0 2024-06-25 16:28:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815.2.4-a-1561673ea7 csi-node-driver-q78rw eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali09c0dc3e679 [] []}} ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.737 [INFO][3672] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.803 [INFO][3686] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" HandleID="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.826 [INFO][3686] ipam_plugin.go 264: Auto assigning IP ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" HandleID="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5d80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-1561673ea7", "pod":"csi-node-driver-q78rw", "timestamp":"2024-06-25 16:29:10.803220802 +0000 UTC"}, Hostname:"ci-3815.2.4-a-1561673ea7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.826 [INFO][3686] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.826 [INFO][3686] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.826 [INFO][3686] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-1561673ea7' Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.832 [INFO][3686] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.848 [INFO][3686] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.857 [INFO][3686] ipam.go 489: Trying affinity for 192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.861 [INFO][3686] ipam.go 155: Attempting to load block cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.865 [INFO][3686] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.865 [INFO][3686] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.0/26 handle="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.868 [INFO][3686] ipam.go 1685: Creating new handle: k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261 Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.874 [INFO][3686] ipam.go 1203: Writing block in order to claim IPs block=192.168.94.0/26 handle="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.883 [INFO][3686] ipam.go 1216: Successfully claimed IPs: [192.168.94.1/26] block=192.168.94.0/26 handle="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.883 [INFO][3686] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.1/26] handle="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.883 [INFO][3686] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:10.942037 containerd[1279]: 2024-06-25 16:29:10.883 [INFO][3686] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.94.1/26] IPv6=[] ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" HandleID="k8s-pod-network.58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.943252 containerd[1279]: 2024-06-25 16:29:10.887 [INFO][3672] k8s.go 386: Populated endpoint ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a3072e8-f1d4-4a0b-a333-9167360f3eb4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"", Pod:"csi-node-driver-q78rw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali09c0dc3e679", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:10.943252 containerd[1279]: 2024-06-25 16:29:10.887 [INFO][3672] k8s.go 387: Calico CNI using IPs: [192.168.94.1/32] ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.943252 containerd[1279]: 2024-06-25 16:29:10.887 [INFO][3672] dataplane_linux.go 68: Setting the host side veth name to cali09c0dc3e679 ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.943252 containerd[1279]: 2024-06-25 16:29:10.919 [INFO][3672] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:10.943252 containerd[1279]: 2024-06-25 16:29:10.920 [INFO][3672] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" 
WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a3072e8-f1d4-4a0b-a333-9167360f3eb4", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261", Pod:"csi-node-driver-q78rw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali09c0dc3e679", MAC:"22:67:ed:77:a6:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:10.943252 containerd[1279]: 2024-06-25 16:29:10.935 [INFO][3672] k8s.go 500: Wrote updated endpoint to datastore ContainerID="58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261" Namespace="calico-system" Pod="csi-node-driver-q78rw" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:11.004219 containerd[1279]: time="2024-06-25T16:29:11.004060302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:11.004219 containerd[1279]: time="2024-06-25T16:29:11.004158200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:11.004598 containerd[1279]: time="2024-06-25T16:29:11.004189455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:11.004598 containerd[1279]: time="2024-06-25T16:29:11.004573453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:11.034325 systemd[1]: Started cri-containerd-58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261.scope - libcontainer container 58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261. 
Jun 25 16:29:11.052000 audit: BPF prog-id=153 op=LOAD Jun 25 16:29:11.054374 kernel: kauditd_printk_skb: 47 callbacks suppressed Jun 25 16:29:11.054498 kernel: audit: type=1334 audit(1719332951.052:576): prog-id=153 op=LOAD Jun 25 16:29:11.056000 audit: BPF prog-id=154 op=LOAD Jun 25 16:29:11.056000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3716 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.061907 kernel: audit: type=1334 audit(1719332951.056:577): prog-id=154 op=LOAD Jun 25 16:29:11.062076 kernel: audit: type=1300 audit(1719332951.056:577): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3716 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313031303932313234653138373932316666623833306531356365 Jun 25 16:29:11.073114 kernel: audit: type=1327 audit(1719332951.056:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313031303932313234653138373932316666623833306531356365 Jun 25 16:29:11.073326 kernel: audit: type=1334 audit(1719332951.056:578): prog-id=155 op=LOAD Jun 25 16:29:11.073381 kernel: audit: type=1300 audit(1719332951.056:578): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3716 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.056000 audit: BPF prog-id=155 op=LOAD Jun 25 16:29:11.056000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3716 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313031303932313234653138373932316666623833306531356365 Jun 25 16:29:11.082113 kernel: audit: type=1327 audit(1719332951.056:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313031303932313234653138373932316666623833306531356365 Jun 25 16:29:11.089682 kernel: audit: type=1334 audit(1719332951.057:579): prog-id=155 op=UNLOAD Jun 25 16:29:11.057000 audit: BPF prog-id=155 op=UNLOAD Jun 25 16:29:11.092501 kernel: audit: type=1334 audit(1719332951.057:580): prog-id=154 op=UNLOAD Jun 25 16:29:11.057000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:29:11.057000 audit: BPF prog-id=156 op=LOAD Jun 25 
16:29:11.095270 kernel: audit: type=1334 audit(1719332951.057:581): prog-id=156 op=LOAD Jun 25 16:29:11.057000 audit[3728]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3716 pid=3728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538313031303932313234653138373932316666623833306531356365 Jun 25 16:29:11.098702 containerd[1279]: time="2024-06-25T16:29:11.098609597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q78rw,Uid:1a3072e8-f1d4-4a0b-a333-9167360f3eb4,Namespace:calico-system,Attempt:1,} returns sandbox id \"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261\"" Jun 25 16:29:11.104762 containerd[1279]: time="2024-06-25T16:29:11.103813883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:29:11.203706 containerd[1279]: time="2024-06-25T16:29:11.203639355Z" level=info msg="StopPodSandbox for \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\"" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.286 [INFO][3772] k8s.go 608: Cleaning up netns ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.287 [INFO][3772] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" iface="eth0" netns="/var/run/netns/cni-7ce0a551-33b0-2be2-71b5-620e0776a431" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.287 [INFO][3772] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" iface="eth0" netns="/var/run/netns/cni-7ce0a551-33b0-2be2-71b5-620e0776a431" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.287 [INFO][3772] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" iface="eth0" netns="/var/run/netns/cni-7ce0a551-33b0-2be2-71b5-620e0776a431" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.287 [INFO][3772] k8s.go 615: Releasing IP address(es) ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.287 [INFO][3772] utils.go 188: Calico CNI releasing IP address ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.315 [INFO][3778] ipam_plugin.go 411: Releasing address using handleID ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.315 [INFO][3778] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.316 [INFO][3778] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.324 [WARNING][3778] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.325 [INFO][3778] ipam_plugin.go 439: Releasing address using workloadID ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.327 [INFO][3778] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:11.333361 containerd[1279]: 2024-06-25 16:29:11.330 [INFO][3772] k8s.go 621: Teardown processing complete. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:11.334574 containerd[1279]: time="2024-06-25T16:29:11.333638814Z" level=info msg="TearDown network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\" successfully" Jun 25 16:29:11.334574 containerd[1279]: time="2024-06-25T16:29:11.333705946Z" level=info msg="StopPodSandbox for \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\" returns successfully" Jun 25 16:29:11.334904 containerd[1279]: time="2024-06-25T16:29:11.334848205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c968684-ncbrr,Uid:ea16791a-e4ba-4de5-bc62-0b62890d911f,Namespace:calico-system,Attempt:1,}" Jun 25 16:29:11.540068 systemd[1]: run-netns-cni\x2d7ce0a551\x2d33b0\x2d2be2\x2d71b5\x2d620e0776a431.mount: Deactivated successfully. 
Jun 25 16:29:11.597081 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califc395c205f8: link becomes ready Jun 25 16:29:11.597485 systemd-networkd[1093]: califc395c205f8: Link UP Jun 25 16:29:11.597836 systemd-networkd[1093]: califc395c205f8: Gained carrier Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.402 [INFO][3786] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.421 [INFO][3786] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0 calico-kube-controllers-66c968684- calico-system ea16791a-e4ba-4de5-bc62-0b62890d911f 865 0 2024-06-25 16:28:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66c968684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815.2.4-a-1561673ea7 calico-kube-controllers-66c968684-ncbrr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califc395c205f8 [] []}} ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.421 [INFO][3786] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.499 [INFO][3804] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" HandleID="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.514 [INFO][3804] ipam_plugin.go 264: Auto assigning IP ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" HandleID="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00029a350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-1561673ea7", "pod":"calico-kube-controllers-66c968684-ncbrr", "timestamp":"2024-06-25 16:29:11.499176528 +0000 UTC"}, Hostname:"ci-3815.2.4-a-1561673ea7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.514 [INFO][3804] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.514 [INFO][3804] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.514 [INFO][3804] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-1561673ea7' Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.520 [INFO][3804] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.528 [INFO][3804] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.556 [INFO][3804] ipam.go 489: Trying affinity for 192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.561 [INFO][3804] ipam.go 155: Attempting to load block cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.568 [INFO][3804] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.568 [INFO][3804] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.0/26 handle="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.572 [INFO][3804] ipam.go 1685: Creating new handle: k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802 Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.579 [INFO][3804] ipam.go 1203: Writing block in order to claim IPs block=192.168.94.0/26 handle="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.588 [INFO][3804] ipam.go 1216: Successfully claimed IPs: [192.168.94.2/26] block=192.168.94.0/26 handle="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.588 [INFO][3804] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.2/26] handle="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.588 [INFO][3804] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:11.625846 containerd[1279]: 2024-06-25 16:29:11.588 [INFO][3804] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.94.2/26] IPv6=[] ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" HandleID="k8s-pod-network.ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.627196 containerd[1279]: 2024-06-25 16:29:11.590 [INFO][3786] k8s.go 386: Populated endpoint ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0", GenerateName:"calico-kube-controllers-66c968684-", Namespace:"calico-system", SelfLink:"", UID:"ea16791a-e4ba-4de5-bc62-0b62890d911f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c968684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"", Pod:"calico-kube-controllers-66c968684-ncbrr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc395c205f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:11.627196 containerd[1279]: 2024-06-25 16:29:11.591 [INFO][3786] k8s.go 387: Calico CNI using IPs: [192.168.94.2/32] ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.627196 containerd[1279]: 2024-06-25 16:29:11.591 [INFO][3786] dataplane_linux.go 68: Setting the host side veth name to califc395c205f8 ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.627196 containerd[1279]: 2024-06-25 16:29:11.597 [INFO][3786] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.627196 containerd[1279]: 2024-06-25 16:29:11.600 [INFO][3786] k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0", GenerateName:"calico-kube-controllers-66c968684-", Namespace:"calico-system", SelfLink:"", UID:"ea16791a-e4ba-4de5-bc62-0b62890d911f", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c968684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802", Pod:"calico-kube-controllers-66c968684-ncbrr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc395c205f8", MAC:"e6:8b:e7:39:10:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:11.627196 containerd[1279]: 2024-06-25 16:29:11.613 [INFO][3786] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802" Namespace="calico-system" Pod="calico-kube-controllers-66c968684-ncbrr" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:11.663593 containerd[1279]: time="2024-06-25T16:29:11.663453404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:11.663593 containerd[1279]: time="2024-06-25T16:29:11.663526123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:11.663593 containerd[1279]: time="2024-06-25T16:29:11.663557233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:11.664309 containerd[1279]: time="2024-06-25T16:29:11.664228759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:11.712571 systemd[1]: run-containerd-runc-k8s.io-ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802-runc.p3ZYlj.mount: Deactivated successfully. Jun 25 16:29:11.724887 systemd[1]: Started cri-containerd-ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802.scope - libcontainer container ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802. 
Jun 25 16:29:11.741000 audit: BPF prog-id=157 op=LOAD Jun 25 16:29:11.742000 audit: BPF prog-id=158 op=LOAD Jun 25 16:29:11.742000 audit[3844]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3834 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261363937393336666464666134653561383137376136643366616233 Jun 25 16:29:11.742000 audit: BPF prog-id=159 op=LOAD Jun 25 16:29:11.742000 audit[3844]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3834 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261363937393336666464666134653561383137376136643366616233 Jun 25 16:29:11.742000 audit: BPF prog-id=159 op=UNLOAD Jun 25 16:29:11.742000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:29:11.743000 audit: BPF prog-id=160 op=LOAD Jun 25 16:29:11.743000 audit[3844]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3834 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:11.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261363937393336666464666134653561383137376136643366616233 Jun 25 16:29:11.793443 containerd[1279]: time="2024-06-25T16:29:11.793370898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66c968684-ncbrr,Uid:ea16791a-e4ba-4de5-bc62-0b62890d911f,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802\"" Jun 25 16:29:11.959194 systemd-networkd[1093]: cali09c0dc3e679: Gained IPv6LL Jun 25 16:29:12.598191 containerd[1279]: time="2024-06-25T16:29:12.598119458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:12.598842 containerd[1279]: time="2024-06-25T16:29:12.598752338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:29:12.600752 containerd[1279]: time="2024-06-25T16:29:12.600699657Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:12.602871 containerd[1279]: time="2024-06-25T16:29:12.602795158Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:12.605218 
containerd[1279]: time="2024-06-25T16:29:12.605165186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:12.605986 containerd[1279]: time="2024-06-25T16:29:12.605925090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.502051007s" Jun 25 16:29:12.606111 containerd[1279]: time="2024-06-25T16:29:12.605987692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:29:12.607392 containerd[1279]: time="2024-06-25T16:29:12.606776737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:29:12.608537 containerd[1279]: time="2024-06-25T16:29:12.608500192Z" level=info msg="CreateContainer within sandbox \"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:29:12.658837 containerd[1279]: time="2024-06-25T16:29:12.658761798Z" level=info msg="CreateContainer within sandbox \"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b0d72fdbae5eebe4e7c206a0b2b915b7e5426bbfb82f52beebd59e2dcda0d930\"" Jun 25 16:29:12.660171 containerd[1279]: time="2024-06-25T16:29:12.660070583Z" level=info msg="StartContainer for \"b0d72fdbae5eebe4e7c206a0b2b915b7e5426bbfb82f52beebd59e2dcda0d930\"" Jun 25 16:29:12.713716 systemd[1]: run-containerd-runc-k8s.io-b0d72fdbae5eebe4e7c206a0b2b915b7e5426bbfb82f52beebd59e2dcda0d930-runc.otVoOy.mount: Deactivated successfully. Jun 25 16:29:12.726359 systemd[1]: Started cri-containerd-b0d72fdbae5eebe4e7c206a0b2b915b7e5426bbfb82f52beebd59e2dcda0d930.scope - libcontainer container b0d72fdbae5eebe4e7c206a0b2b915b7e5426bbfb82f52beebd59e2dcda0d930. 
Jun 25 16:29:12.752000 audit: BPF prog-id=161 op=LOAD Jun 25 16:29:12.752000 audit[3899]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3716 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:12.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230643732666462616535656562653465376332303661306232623931 Jun 25 16:29:12.752000 audit: BPF prog-id=162 op=LOAD Jun 25 16:29:12.752000 audit[3899]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3716 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:12.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230643732666462616535656562653465376332303661306232623931 Jun 25 16:29:12.752000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:29:12.752000 audit: BPF prog-id=161 op=UNLOAD Jun 25 16:29:12.752000 audit: BPF prog-id=163 op=LOAD Jun 25 16:29:12.752000 audit[3899]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3716 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:12.752000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230643732666462616535656562653465376332303661306232623931 Jun 25 16:29:12.783514 containerd[1279]: time="2024-06-25T16:29:12.783436526Z" level=info msg="StartContainer for \"b0d72fdbae5eebe4e7c206a0b2b915b7e5426bbfb82f52beebd59e2dcda0d930\" returns successfully" Jun 25 16:29:12.983191 systemd-networkd[1093]: califc395c205f8: Gained IPv6LL Jun 25 16:29:13.203369 containerd[1279]: time="2024-06-25T16:29:13.203308081Z" level=info msg="StopPodSandbox for \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\"" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.293 [INFO][3940] k8s.go 608: Cleaning up netns ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.294 [INFO][3940] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" iface="eth0" netns="/var/run/netns/cni-7b8ff670-787c-be60-ee5a-dbcf67c21a52" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.294 [INFO][3940] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" iface="eth0" netns="/var/run/netns/cni-7b8ff670-787c-be60-ee5a-dbcf67c21a52" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.294 [INFO][3940] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" iface="eth0" netns="/var/run/netns/cni-7b8ff670-787c-be60-ee5a-dbcf67c21a52" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.295 [INFO][3940] k8s.go 615: Releasing IP address(es) ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.295 [INFO][3940] utils.go 188: Calico CNI releasing IP address ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.323 [INFO][3947] ipam_plugin.go 411: Releasing address using handleID ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.323 [INFO][3947] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.323 [INFO][3947] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.331 [WARNING][3947] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.331 [INFO][3947] ipam_plugin.go 439: Releasing address using workloadID ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.334 [INFO][3947] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:13.339460 containerd[1279]: 2024-06-25 16:29:13.336 [INFO][3940] k8s.go 621: Teardown processing complete. ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:13.340797 containerd[1279]: time="2024-06-25T16:29:13.340731868Z" level=info msg="TearDown network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\" successfully" Jun 25 16:29:13.340923 containerd[1279]: time="2024-06-25T16:29:13.340902033Z" level=info msg="StopPodSandbox for \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\" returns successfully" Jun 25 16:29:13.346735 kubelet[2237]: E0625 16:29:13.344395 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:13.344669 systemd[1]: run-netns-cni\x2d7b8ff670\x2d787c\x2dbe60\x2dee5a\x2ddbcf67c21a52.mount: Deactivated successfully. 
Jun 25 16:29:13.348491 containerd[1279]: time="2024-06-25T16:29:13.348437731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jnbj2,Uid:7f99e445-217f-48d6-b1a5-92f613923722,Namespace:kube-system,Attempt:1,}" Jun 25 16:29:13.576975 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:29:13.577107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9d159a9d105: link becomes ready Jun 25 16:29:13.574143 systemd-networkd[1093]: cali9d159a9d105: Link UP Jun 25 16:29:13.576359 systemd-networkd[1093]: cali9d159a9d105: Gained carrier Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.417 [INFO][3953] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.440 [INFO][3953] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0 coredns-5dd5756b68- kube-system 7f99e445-217f-48d6-b1a5-92f613923722 887 0 2024-06-25 16:28:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-1561673ea7 coredns-5dd5756b68-jnbj2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d159a9d105 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.440 [INFO][3953] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.488 [INFO][3970] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" HandleID="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.504 [INFO][3970] ipam_plugin.go 264: Auto assigning IP ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" HandleID="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-1561673ea7", "pod":"coredns-5dd5756b68-jnbj2", "timestamp":"2024-06-25 16:29:13.487971345 +0000 UTC"}, Hostname:"ci-3815.2.4-a-1561673ea7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.504 [INFO][3970] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.504 [INFO][3970] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.504 [INFO][3970] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-1561673ea7' Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.508 [INFO][3970] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.525 [INFO][3970] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.541 [INFO][3970] ipam.go 489: Trying affinity for 192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.544 [INFO][3970] ipam.go 155: Attempting to load block cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.550 [INFO][3970] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.550 [INFO][3970] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.0/26 handle="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.556 [INFO][3970] ipam.go 1685: Creating new handle: k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4 Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.562 [INFO][3970] ipam.go 1203: Writing block in order to claim IPs block=192.168.94.0/26 handle="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.569 [INFO][3970] ipam.go 1216: Successfully claimed IPs: [192.168.94.3/26] block=192.168.94.0/26 handle="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.569 [INFO][3970] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.3/26] handle="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.569 [INFO][3970] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:13.592575 containerd[1279]: 2024-06-25 16:29:13.569 [INFO][3970] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.94.3/26] IPv6=[] ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" HandleID="k8s-pod-network.961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.593576 containerd[1279]: 2024-06-25 16:29:13.571 [INFO][3953] k8s.go 386: Populated endpoint ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7f99e445-217f-48d6-b1a5-92f613923722", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"", Pod:"coredns-5dd5756b68-jnbj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d159a9d105", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:13.593576 containerd[1279]: 2024-06-25 16:29:13.571 [INFO][3953] k8s.go 387: Calico CNI using IPs: [192.168.94.3/32] ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.593576 containerd[1279]: 2024-06-25 16:29:13.571 [INFO][3953] dataplane_linux.go 68: Setting the host side veth name to cali9d159a9d105 ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.593576 containerd[1279]: 2024-06-25 16:29:13.576 [INFO][3953] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" 
Jun 25 16:29:13.593576 containerd[1279]: 2024-06-25 16:29:13.577 [INFO][3953] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7f99e445-217f-48d6-b1a5-92f613923722", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4", Pod:"coredns-5dd5756b68-jnbj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d159a9d105", MAC:"5e:20:31:81:c2:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:13.593576 containerd[1279]: 2024-06-25 16:29:13.589 [INFO][3953] k8s.go 500: Wrote updated endpoint to datastore ContainerID="961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4" Namespace="kube-system" Pod="coredns-5dd5756b68-jnbj2" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:13.647655 containerd[1279]: time="2024-06-25T16:29:13.647484160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:13.648121 containerd[1279]: time="2024-06-25T16:29:13.647624499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:13.648121 containerd[1279]: time="2024-06-25T16:29:13.647649627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:13.648121 containerd[1279]: time="2024-06-25T16:29:13.647695491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:13.680242 systemd[1]: Started cri-containerd-961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4.scope - libcontainer container 961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4. Jun 25 16:29:13.696000 audit: BPF prog-id=164 op=LOAD Jun 25 16:29:13.696000 audit: BPF prog-id=165 op=LOAD Jun 25 16:29:13.696000 audit[4006]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3996 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:13.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316465323563343963303237643330313733363061376165653962 Jun 25 16:29:13.696000 audit: BPF prog-id=166 op=LOAD Jun 25 16:29:13.696000 audit[4006]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3996 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:13.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316465323563343963303237643330313733363061376165653962 Jun 25 16:29:13.696000 audit: BPF prog-id=166 op=UNLOAD Jun 25 16:29:13.697000 audit: BPF prog-id=165 op=UNLOAD Jun 25 16:29:13.697000 audit: BPF prog-id=167 op=LOAD Jun 25 16:29:13.697000 audit[4006]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3996 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:13.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316465323563343963303237643330313733363061376165653962 Jun 25 16:29:13.744401 containerd[1279]: time="2024-06-25T16:29:13.744325148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jnbj2,Uid:7f99e445-217f-48d6-b1a5-92f613923722,Namespace:kube-system,Attempt:1,} returns sandbox id \"961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4\"" Jun 25 16:29:13.749868 kubelet[2237]: E0625 16:29:13.746534 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:13.760560 containerd[1279]: time="2024-06-25T16:29:13.760497471Z" level=info msg="CreateContainer within sandbox \"961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:29:13.859870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069050537.mount: Deactivated successfully. 
Jun 25 16:29:13.867323 containerd[1279]: time="2024-06-25T16:29:13.867272453Z" level=info msg="CreateContainer within sandbox \"961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2be8e99a7c915f0bf7349d32241a439c1c0f0b98c1cfc32d69f631609bb5baf\"" Jun 25 16:29:13.870296 containerd[1279]: time="2024-06-25T16:29:13.870243753Z" level=info msg="StartContainer for \"d2be8e99a7c915f0bf7349d32241a439c1c0f0b98c1cfc32d69f631609bb5baf\"" Jun 25 16:29:13.947214 systemd[1]: Started cri-containerd-d2be8e99a7c915f0bf7349d32241a439c1c0f0b98c1cfc32d69f631609bb5baf.scope - libcontainer container d2be8e99a7c915f0bf7349d32241a439c1c0f0b98c1cfc32d69f631609bb5baf. Jun 25 16:29:13.963000 audit: BPF prog-id=168 op=LOAD Jun 25 16:29:13.963000 audit: BPF prog-id=169 op=LOAD Jun 25 16:29:13.963000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3996 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:13.963000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432626538653939613763393135663062663733343964333232343161 Jun 25 16:29:13.964000 audit: BPF prog-id=170 op=LOAD Jun 25 16:29:13.964000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3996 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:13.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432626538653939613763393135663062663733343964333232343161 Jun 25 16:29:13.965000 audit: BPF prog-id=170 op=UNLOAD Jun 25 16:29:13.965000 audit: BPF prog-id=169 op=UNLOAD Jun 25 16:29:13.965000 audit: BPF prog-id=171 op=LOAD Jun 25 16:29:13.965000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3996 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:13.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432626538653939613763393135663062663733343964333232343161 Jun 25 16:29:13.996589 containerd[1279]: time="2024-06-25T16:29:13.996105040Z" level=info msg="StartContainer for \"d2be8e99a7c915f0bf7349d32241a439c1c0f0b98c1cfc32d69f631609bb5baf\" returns successfully" Jun 25 16:29:14.205287 containerd[1279]: time="2024-06-25T16:29:14.205142042Z" level=info msg="StopPodSandbox for \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\"" Jun 25 16:29:14.524722 kubelet[2237]: E0625 16:29:14.524581 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.375 [INFO][4101] k8s.go 608: Cleaning up netns ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.375 [INFO][4101] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" iface="eth0" netns="/var/run/netns/cni-aa6590d9-f964-7cbd-28dd-ef299ae6f8ec" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.375 [INFO][4101] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" iface="eth0" netns="/var/run/netns/cni-aa6590d9-f964-7cbd-28dd-ef299ae6f8ec" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.376 [INFO][4101] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" iface="eth0" netns="/var/run/netns/cni-aa6590d9-f964-7cbd-28dd-ef299ae6f8ec" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.376 [INFO][4101] k8s.go 615: Releasing IP address(es) ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.376 [INFO][4101] utils.go 188: Calico CNI releasing IP address ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.474 [INFO][4108] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.480 [INFO][4108] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.481 [INFO][4108] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.495 [WARNING][4108] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.495 [INFO][4108] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.531 [INFO][4108] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:14.556598 containerd[1279]: 2024-06-25 16:29:14.545 [INFO][4101] k8s.go 621: Teardown processing complete. 
ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:14.561689 containerd[1279]: time="2024-06-25T16:29:14.561619959Z" level=info msg="TearDown network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\" successfully" Jun 25 16:29:14.563580 containerd[1279]: time="2024-06-25T16:29:14.561993539Z" level=info msg="StopPodSandbox for \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\" returns successfully" Jun 25 16:29:14.591985 kubelet[2237]: E0625 16:29:14.591773 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:14.592725 containerd[1279]: time="2024-06-25T16:29:14.592679652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lhffn,Uid:c683e090-e0a1-4042-b2a1-c22a1edf7207,Namespace:kube-system,Attempt:1,}" Jun 25 16:29:14.627146 systemd[1]: run-netns-cni\x2daa6590d9\x2df964\x2d7cbd\x2d28dd\x2def299ae6f8ec.mount: Deactivated successfully. Jun 25 16:29:14.647612 kubelet[2237]: I0625 16:29:14.647483 2237 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:29:14.650972 kubelet[2237]: E0625 16:29:14.650677 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:14.688726 kubelet[2237]: I0625 16:29:14.687335 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jnbj2" podStartSLOduration=36.687272673 podCreationTimestamp="2024-06-25 16:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:29:14.560303676 +0000 UTC m=+50.601615325" watchObservedRunningTime="2024-06-25 16:29:14.687272673 +0000 UTC m=+50.728584326" Jun 25 16:29:14.740000 audit[4120]: NETFILTER_CFG table=filter:101 family=2 entries=16 op=nft_register_rule pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.740000 audit[4120]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff50fed710 a2=0 a3=7fff50fed6fc items=0 ppid=2413 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.744000 audit[4120]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.744000 audit[4120]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff50fed710 a2=0 a3=0 items=0 ppid=2413 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.822000 audit[4135]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jun 25 16:29:14.822000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd147c8110 a2=0 a3=7ffd147c80fc items=0 ppid=2413 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:14.824000 audit[4135]: NETFILTER_CFG table=nat:104 family=2 entries=33 op=nft_register_chain pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:14.824000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffd147c8110 a2=0 a3=7ffd147c80fc items=0 ppid=2413 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:14.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:15.216412 systemd-networkd[1093]: cali292ce8e59f1: Link UP Jun 25 16:29:15.223823 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:29:15.224027 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali292ce8e59f1: link becomes ready Jun 25 16:29:15.223413 systemd-networkd[1093]: cali292ce8e59f1: Gained carrier Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:14.804 [INFO][4121] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:14.881 [INFO][4121] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0 coredns-5dd5756b68- kube-system c683e090-e0a1-4042-b2a1-c22a1edf7207 903 0 2024-06-25 16:28:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-1561673ea7 coredns-5dd5756b68-lhffn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali292ce8e59f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:14.881 [INFO][4121] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:14.984 [INFO][4140] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" HandleID="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.050 [INFO][4140] ipam_plugin.go 264: Auto assigning IP 
ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" HandleID="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003120b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-1561673ea7", "pod":"coredns-5dd5756b68-lhffn", "timestamp":"2024-06-25 16:29:14.984649566 +0000 UTC"}, Hostname:"ci-3815.2.4-a-1561673ea7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.050 [INFO][4140] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.051 [INFO][4140] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.051 [INFO][4140] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-1561673ea7' Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.079 [INFO][4140] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.108 [INFO][4140] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.139 [INFO][4140] ipam.go 489: Trying affinity for 192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.145 [INFO][4140] ipam.go 155: Attempting to load block cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.152 [INFO][4140] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.152 [INFO][4140] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.0/26 handle="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.160 [INFO][4140] ipam.go 1685: Creating new handle: k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498 Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.168 [INFO][4140] ipam.go 1203: Writing block in order to claim IPs block=192.168.94.0/26 handle="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.186 [INFO][4140] ipam.go 1216: Successfully claimed IPs: [192.168.94.4/26] block=192.168.94.0/26 handle="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.186 [INFO][4140] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.4/26] handle="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.186 [INFO][4140] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:29:15.255987 containerd[1279]: 2024-06-25 16:29:15.186 [INFO][4140] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.94.4/26] IPv6=[] ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" HandleID="k8s-pod-network.834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:15.257297 containerd[1279]: 2024-06-25 16:29:15.191 [INFO][4121] k8s.go 386: Populated endpoint ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c683e090-e0a1-4042-b2a1-c22a1edf7207", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"", Pod:"coredns-5dd5756b68-lhffn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292ce8e59f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:15.257297 containerd[1279]: 2024-06-25 16:29:15.194 [INFO][4121] k8s.go 387: Calico CNI using IPs: [192.168.94.4/32] ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:15.257297 containerd[1279]: 2024-06-25 16:29:15.194 [INFO][4121] dataplane_linux.go 68: Setting the host side veth name to cali292ce8e59f1 ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:15.257297 containerd[1279]: 2024-06-25 16:29:15.225 [INFO][4121] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" 
Jun 25 16:29:15.257297 containerd[1279]: 2024-06-25 16:29:15.227 [INFO][4121] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c683e090-e0a1-4042-b2a1-c22a1edf7207", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498", Pod:"coredns-5dd5756b68-lhffn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292ce8e59f1", MAC:"f6:fe:f0:97:c6:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:15.257297 containerd[1279]: 2024-06-25 16:29:15.252 [INFO][4121] k8s.go 500: Wrote updated endpoint to datastore ContainerID="834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498" Namespace="kube-system" Pod="coredns-5dd5756b68-lhffn" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:15.287212 systemd-networkd[1093]: cali9d159a9d105: Gained IPv6LL Jun 25 16:29:15.393490 containerd[1279]: time="2024-06-25T16:29:15.391317041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:29:15.393490 containerd[1279]: time="2024-06-25T16:29:15.391476822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:15.393490 containerd[1279]: time="2024-06-25T16:29:15.391545611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:29:15.393490 containerd[1279]: time="2024-06-25T16:29:15.391569048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:29:15.513290 systemd[1]: Started cri-containerd-834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498.scope - libcontainer container 834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498. Jun 25 16:29:15.527988 kubelet[2237]: I0625 16:29:15.527361 2237 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:29:15.528435 kubelet[2237]: E0625 16:29:15.528191 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:15.556472 kubelet[2237]: E0625 16:29:15.555996 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:15.556717 kubelet[2237]: E0625 16:29:15.556530 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:15.599000 audit: BPF prog-id=172 op=LOAD Jun 25 16:29:15.601000 audit: BPF prog-id=173 op=LOAD Jun 25 16:29:15.601000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4177 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:15.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833346330626433623838363431376139343939373932376535373536 Jun 25 16:29:15.601000 audit: BPF prog-id=174 op=LOAD Jun 25 16:29:15.601000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4177 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:15.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833346330626433623838363431376139343939373932376535373536 Jun 25 16:29:15.602000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:29:15.603000 audit: BPF prog-id=173 op=UNLOAD Jun 25 16:29:15.603000 audit: BPF prog-id=175 op=LOAD Jun 25 16:29:15.603000 audit[4187]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4177 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:15.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833346330626433623838363431376139343939373932376535373536 Jun 25 16:29:15.624709 systemd[1]: run-containerd-runc-k8s.io-834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498-runc.8imcs8.mount: Deactivated successfully. 
Jun 25 16:29:15.799497 containerd[1279]: time="2024-06-25T16:29:15.799446824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lhffn,Uid:c683e090-e0a1-4042-b2a1-c22a1edf7207,Namespace:kube-system,Attempt:1,} returns sandbox id \"834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498\"" Jun 25 16:29:15.802418 kubelet[2237]: E0625 16:29:15.801889 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:15.820238 containerd[1279]: time="2024-06-25T16:29:15.820190452Z" level=info msg="CreateContainer within sandbox \"834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:29:15.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.92.91.188:22-139.178.89.65:45834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:15.848763 systemd[1]: Started sshd@8-164.92.91.188:22-139.178.89.65:45834.service - OpenSSH per-connection server daemon (139.178.89.65:45834). Jun 25 16:29:15.904481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2825878776.mount: Deactivated successfully. Jun 25 16:29:15.926609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465749734.mount: Deactivated successfully. Jun 25 16:29:15.959471 containerd[1279]: time="2024-06-25T16:29:15.959399092Z" level=info msg="CreateContainer within sandbox \"834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78d85be2dc6fd1b2bc6e24e5ea97554b8ab5235fb72a7418cdbdc7f3a8dfb6f0\"" Jun 25 16:29:15.960577 containerd[1279]: time="2024-06-25T16:29:15.960533471Z" level=info msg="StartContainer for \"78d85be2dc6fd1b2bc6e24e5ea97554b8ab5235fb72a7418cdbdc7f3a8dfb6f0\"" Jun 25 16:29:16.045000 audit[4259]: NETFILTER_CFG table=filter:105 family=2 entries=9 op=nft_register_rule pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:16.045000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff79b79ed0 a2=0 a3=7fff79b79ebc items=0 ppid=2413 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:16.049000 audit[4259]: NETFILTER_CFG table=nat:106 family=2 entries=25 op=nft_register_chain pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:16.049000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7fff79b79ed0 a2=0 a3=7fff79b79ebc items=0 ppid=2413 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:16.074255 kernel: kauditd_printk_skb: 80 callbacks suppressed Jun 25 16:29:16.083125 kernel: audit: type=1101 audit(1719332956.070:618): 
pid=4226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.083197 kernel: audit: type=1103 audit(1719332956.074:619): pid=4226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.083223 kernel: audit: type=1006 audit(1719332956.074:620): pid=4226 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 16:29:16.083250 kernel: audit: type=1300 audit(1719332956.074:620): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6da4b7c0 a2=3 a3=7f411b25b480 items=0 ppid=1 pid=4226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.083284 kernel: audit: type=1327 audit(1719332956.074:620): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:16.070000 audit[4226]: USER_ACCT pid=4226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.074000 audit[4226]: CRED_ACQ pid=4226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.074000 audit[4226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6da4b7c0 a2=3 a3=7f411b25b480 items=0 ppid=1 pid=4226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.074000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:16.083584 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 45834 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:16.080313 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:16.099951 systemd-logind[1272]: New session 9 of user core. Jun 25 16:29:16.103209 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 25 16:29:16.115877 kernel: audit: type=1105 audit(1719332956.111:621): pid=4226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.119346 kernel: audit: type=1103 audit(1719332956.115:622): pid=4276 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.111000 audit[4226]: USER_START pid=4226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.115000 audit[4276]: CRED_ACQ pid=4276 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.126191 systemd[1]: Started cri-containerd-78d85be2dc6fd1b2bc6e24e5ea97554b8ab5235fb72a7418cdbdc7f3a8dfb6f0.scope - libcontainer container 78d85be2dc6fd1b2bc6e24e5ea97554b8ab5235fb72a7418cdbdc7f3a8dfb6f0. Jun 25 16:29:16.176672 kernel: audit: type=1334 audit(1719332956.169:623): prog-id=176 op=LOAD Jun 25 16:29:16.176880 kernel: audit: type=1334 audit(1719332956.171:624): prog-id=177 op=LOAD Jun 25 16:29:16.176929 kernel: audit: type=1300 audit(1719332956.171:624): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4177 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.169000 audit: BPF prog-id=176 op=LOAD Jun 25 16:29:16.171000 audit: BPF prog-id=177 op=LOAD Jun 25 16:29:16.171000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4177 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738643835626532646336666431623262633665323465356561393735 Jun 25 16:29:16.171000 audit: BPF prog-id=178 op=LOAD Jun 25 16:29:16.171000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4177 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738643835626532646336666431623262633665323465356561393735 Jun 25 16:29:16.171000 audit: BPF prog-id=178 op=UNLOAD Jun 25 16:29:16.171000 audit: BPF 
prog-id=177 op=UNLOAD Jun 25 16:29:16.171000 audit: BPF prog-id=179 op=LOAD Jun 25 16:29:16.171000 audit[4263]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4177 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738643835626532646336666431623262633665323465356561393735 Jun 25 16:29:16.247243 systemd-networkd[1093]: cali292ce8e59f1: Gained IPv6LL Jun 25 16:29:16.305458 containerd[1279]: time="2024-06-25T16:29:16.305301407Z" level=info msg="StartContainer for \"78d85be2dc6fd1b2bc6e24e5ea97554b8ab5235fb72a7418cdbdc7f3a8dfb6f0\" returns successfully" Jun 25 16:29:16.404000 audit[4226]: USER_END pid=4226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.404000 audit[4226]: CRED_DISP pid=4226 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:16.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-164.92.91.188:22-139.178.89.65:45834 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:16.407894 systemd[1]: sshd@8-164.92.91.188:22-139.178.89.65:45834.service: Deactivated successfully. Jun 25 16:29:16.404582 sshd[4226]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:16.409439 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:29:16.410736 systemd-logind[1272]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:29:16.411760 systemd-logind[1272]: Removed session 9. Jun 25 16:29:16.586066 kubelet[2237]: E0625 16:29:16.581161 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:16.586066 kubelet[2237]: E0625 16:29:16.584062 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:16.614179 kubelet[2237]: I0625 16:29:16.614105 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lhffn" podStartSLOduration=38.614012966 podCreationTimestamp="2024-06-25 16:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:29:16.613557531 +0000 UTC m=+52.654869160" watchObservedRunningTime="2024-06-25 16:29:16.614012966 +0000 UTC m=+52.655324607" Jun 25 16:29:16.690236 systemd[1]: run-containerd-runc-k8s.io-08e3eddd8d23ea34dd6dd13f5db4c4e6ced91053e1ef02e2e015e420f6893300-runc.eHC2UE.mount: Deactivated successfully. 
Jun 25 16:29:16.713000 audit[4353]: NETFILTER_CFG table=filter:107 family=2 entries=8 op=nft_register_rule pid=4353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:16.713000 audit[4353]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe223b4410 a2=0 a3=7ffe223b43fc items=0 ppid=2413 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:16.748000 audit[4353]: NETFILTER_CFG table=nat:108 family=2 entries=56 op=nft_register_chain pid=4353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:16.748000 audit[4353]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe223b4410 a2=0 a3=7ffe223b43fc items=0 ppid=2413 pid=4353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:16.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:16.826256 containerd[1279]: time="2024-06-25T16:29:16.826161409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.827521 containerd[1279]: time="2024-06-25T16:29:16.827443229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:29:16.828494 containerd[1279]: time="2024-06-25T16:29:16.828436575Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.830889 containerd[1279]: time="2024-06-25T16:29:16.830834227Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.834619 containerd[1279]: time="2024-06-25T16:29:16.834569422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:16.836053 containerd[1279]: time="2024-06-25T16:29:16.836005078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.229175287s" Jun 25 16:29:16.836290 containerd[1279]: time="2024-06-25T16:29:16.836214185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:29:16.839795 containerd[1279]: time="2024-06-25T16:29:16.838881235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:29:16.886747 containerd[1279]: 
time="2024-06-25T16:29:16.886543432Z" level=info msg="CreateContainer within sandbox \"ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:29:16.919841 containerd[1279]: time="2024-06-25T16:29:16.919789320Z" level=info msg="CreateContainer within sandbox \"ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c\"" Jun 25 16:29:16.920824 containerd[1279]: time="2024-06-25T16:29:16.920792141Z" level=info msg="StartContainer for \"b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c\"" Jun 25 16:29:16.974371 systemd[1]: Started cri-containerd-b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c.scope - libcontainer container b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c. Jun 25 16:29:17.024000 audit: BPF prog-id=180 op=LOAD Jun 25 16:29:17.024000 audit: BPF prog-id=181 op=LOAD Jun 25 16:29:17.024000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=3834 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313063616339366233323761323261613830393433663535366366 Jun 25 16:29:17.025000 audit: BPF prog-id=182 op=LOAD Jun 25 16:29:17.025000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=3834 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313063616339366233323761323261613830393433663535366366 Jun 25 16:29:17.026000 audit: BPF prog-id=182 op=UNLOAD Jun 25 16:29:17.026000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:29:17.026000 audit: BPF prog-id=183 op=LOAD Jun 25 16:29:17.026000 audit[4371]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=3834 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234313063616339366233323761323261613830393433663535366366 Jun 25 16:29:17.115122 systemd-networkd[1093]: vxlan.calico: Link UP Jun 25 16:29:17.115130 systemd-networkd[1093]: vxlan.calico: Gained carrier Jun 25 16:29:17.153000 audit: BPF prog-id=184 op=LOAD Jun 25 16:29:17.153000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9e632370 a2=70 a3=7f682a0da000 items=0 ppid=4193 
pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.153000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:17.153000 audit: BPF prog-id=184 op=UNLOAD Jun 25 16:29:17.153000 audit: BPF prog-id=185 op=LOAD Jun 25 16:29:17.153000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe9e632370 a2=70 a3=6f items=0 ppid=4193 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.153000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:17.153000 audit: BPF prog-id=185 op=UNLOAD Jun 25 16:29:17.153000 audit: BPF prog-id=186 op=LOAD Jun 25 16:29:17.153000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe9e632300 a2=70 a3=7ffe9e632370 items=0 ppid=4193 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.153000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:17.154000 audit: BPF prog-id=186 op=UNLOAD Jun 25 16:29:17.155000 audit: BPF prog-id=187 op=LOAD Jun 25 16:29:17.155000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe9e632330 a2=70 a3=0 items=0 ppid=4193 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.155000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:29:17.182000 audit: BPF prog-id=187 op=UNLOAD Jun 25 16:29:17.192374 containerd[1279]: time="2024-06-25T16:29:17.187145071Z" level=info msg="StartContainer for \"b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c\" returns successfully" Jun 25 16:29:17.375000 audit[4445]: NETFILTER_CFG table=raw:109 family=2 entries=19 op=nft_register_chain pid=4445 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:17.375000 audit[4445]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fffca0f0730 a2=0 a3=7fffca0f071c items=0 ppid=4193 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.375000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 
25 16:29:17.436000 audit[4449]: NETFILTER_CFG table=nat:110 family=2 entries=15 op=nft_register_chain pid=4449 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:17.436000 audit[4453]: NETFILTER_CFG table=mangle:111 family=2 entries=16 op=nft_register_chain pid=4453 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:17.436000 audit[4453]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdb7d3aa50 a2=0 a3=555c2721a000 items=0 ppid=4193 pid=4453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:17.436000 audit[4449]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe20fae580 a2=0 a3=7ffe20fae56c items=0 ppid=4193 pid=4449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:17.440000 audit[4450]: NETFILTER_CFG table=filter:112 family=2 entries=147 op=nft_register_chain pid=4450 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:29:17.440000 audit[4450]: SYSCALL arch=c000003e syscall=46 success=yes exit=83712 a0=3 a1=7ffed29e8f40 a2=0 a3=7ffed29e8f2c items=0 ppid=4193 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:17.440000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:29:17.505000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:17.506000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:17.506000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000f9c060 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:29:17.506000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:17.505000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001cd7980 a2=fc6 a3=0 items=0 
ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:29:17.505000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:17.635921 kubelet[2237]: E0625 16:29:17.635749 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:18.509839 containerd[1279]: time="2024-06-25T16:29:18.509759686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:18.513138 containerd[1279]: time="2024-06-25T16:29:18.513048056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:29:18.550263 containerd[1279]: time="2024-06-25T16:29:18.550207131Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:18.561774 containerd[1279]: time="2024-06-25T16:29:18.561730096Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:18.564767 containerd[1279]: time="2024-06-25T16:29:18.564699728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:29:18.567768 containerd[1279]: time="2024-06-25T16:29:18.567679504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.726984516s" Jun 25 16:29:18.568256 containerd[1279]: time="2024-06-25T16:29:18.568209348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:29:18.571493 containerd[1279]: time="2024-06-25T16:29:18.571429583Z" level=info msg="CreateContainer within sandbox \"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:29:18.597177 containerd[1279]: time="2024-06-25T16:29:18.597110463Z" level=info msg="CreateContainer within sandbox \"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f89259a07d329b5a2a79f6518e19e4a220fb7da2d11c4c096cc753d9ec8935ec\"" Jun 25 16:29:18.598502 containerd[1279]: time="2024-06-25T16:29:18.598447471Z" level=info msg="StartContainer for 
\"f89259a07d329b5a2a79f6518e19e4a220fb7da2d11c4c096cc753d9ec8935ec\"" Jun 25 16:29:18.767098 systemd[1]: run-containerd-runc-k8s.io-b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c-runc.lvbLjD.mount: Deactivated successfully. Jun 25 16:29:18.800512 systemd[1]: Started cri-containerd-f89259a07d329b5a2a79f6518e19e4a220fb7da2d11c4c096cc753d9ec8935ec.scope - libcontainer container f89259a07d329b5a2a79f6518e19e4a220fb7da2d11c4c096cc753d9ec8935ec. Jun 25 16:29:18.877000 audit: BPF prog-id=188 op=LOAD Jun 25 16:29:18.877000 audit[4468]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3716 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:18.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393235396130376433323962356132613739663635313865313965 Jun 25 16:29:18.877000 audit: BPF prog-id=189 op=LOAD Jun 25 16:29:18.877000 audit[4468]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3716 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:18.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393235396130376433323962356132613739663635313865313965 Jun 25 16:29:18.877000 audit: BPF prog-id=189 op=UNLOAD Jun 25 16:29:18.877000 audit: BPF prog-id=188 op=UNLOAD Jun 25 16:29:18.877000 audit: BPF prog-id=190 op=LOAD Jun 25 16:29:18.877000 audit[4468]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3716 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:18.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638393235396130376433323962356132613739663635313865313965 Jun 25 16:29:18.904295 kubelet[2237]: I0625 16:29:18.904122 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66c968684-ncbrr" podStartSLOduration=26.866534785 podCreationTimestamp="2024-06-25 16:28:47 +0000 UTC" firstStartedPulling="2024-06-25 16:29:11.800111409 +0000 UTC m=+47.841423026" lastFinishedPulling="2024-06-25 16:29:16.837642721 +0000 UTC m=+52.878954357" observedRunningTime="2024-06-25 16:29:17.67199572 +0000 UTC m=+53.713307362" watchObservedRunningTime="2024-06-25 16:29:18.904066116 +0000 UTC m=+54.945377754" Jun 25 16:29:18.909807 containerd[1279]: time="2024-06-25T16:29:18.909740228Z" level=info msg="StartContainer for \"f89259a07d329b5a2a79f6518e19e4a220fb7da2d11c4c096cc753d9ec8935ec\" returns successfully" Jun 25 16:29:18.999291 systemd-networkd[1093]: vxlan.calico: Gained IPv6LL Jun 25 16:29:19.536000 audit[2130]: AVC avc: denied { 
watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:19.536000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c011fc5e30 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:29:19.536000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:19.537000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:19.537000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c0113a6e60 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:29:19.537000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:19.549662 kubelet[2237]: I0625 16:29:19.549601 2237 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:29:19.549000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=524877 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:19.549000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c01280cc60 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:29:19.549000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:19.549000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=524883 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:19.549000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c011fc5ec0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:29:19.549000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:19.560257 kubelet[2237]: I0625 16:29:19.560211 2237 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:29:19.562000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:19.562000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c0113a7100 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:29:19.562000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:19.562000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:19.562000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c01280d050 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:29:19.562000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:29:19.702236 systemd[1]: run-containerd-runc-k8s.io-f89259a07d329b5a2a79f6518e19e4a220fb7da2d11c4c096cc753d9ec8935ec-runc.5lLWBR.mount: Deactivated successfully. Jun 25 16:29:21.422635 systemd[1]: Started sshd@9-164.92.91.188:22-139.178.89.65:41056.service - OpenSSH per-connection server daemon (139.178.89.65:41056). Jun 25 16:29:21.429517 kernel: kauditd_printk_skb: 93 callbacks suppressed Jun 25 16:29:21.429575 kernel: audit: type=1130 audit(1719332961.422:665): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.92.91.188:22-139.178.89.65:41056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:21.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.92.91.188:22-139.178.89.65:41056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:21.511000 audit[4520]: USER_ACCT pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.512000 audit[4520]: CRED_ACQ pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.515248 sshd[4520]: Accepted publickey for core from 139.178.89.65 port 41056 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:21.516594 kernel: audit: type=1101 audit(1719332961.511:666): pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.516793 kernel: audit: type=1103 audit(1719332961.512:667): pid=4520 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.517174 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:21.512000 audit[4520]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8990e200 a2=3 a3=7f542df7f480 items=0 ppid=1 pid=4520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.520738 kernel: audit: type=1006 audit(1719332961.512:668): pid=4520 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:29:21.520879 kernel: audit: type=1300 audit(1719332961.512:668): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8990e200 a2=3 a3=7f542df7f480 items=0 ppid=1 pid=4520 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:21.512000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:21.523153 kernel: audit: type=1327 audit(1719332961.512:668): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:21.529076 systemd-logind[1272]: New session 10 of user core. Jun 25 16:29:21.534322 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:29:21.542000 audit[4520]: USER_START pid=4520 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.547532 kernel: audit: type=1105 audit(1719332961.542:669): pid=4520 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.547658 kernel: audit: type=1103 audit(1719332961.545:670): pid=4522 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.545000 audit[4522]: CRED_ACQ pid=4522 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.919333 sshd[4520]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:21.919000 audit[4520]: USER_END pid=4520 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.925825 kernel: audit: type=1106 audit(1719332961.919:671): pid=4520 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.926035 kernel: audit: type=1104 audit(1719332961.920:672): pid=4520 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.920000 audit[4520]: CRED_DISP pid=4520 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:21.927502 systemd[1]: sshd@9-164.92.91.188:22-139.178.89.65:41056.service: Deactivated successfully. Jun 25 16:29:21.928599 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:29:21.930605 systemd-logind[1272]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:29:21.932263 systemd-logind[1272]: Removed session 10. Jun 25 16:29:21.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-164.92.91.188:22-139.178.89.65:41056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:24.215407 containerd[1279]: time="2024-06-25T16:29:24.214698426Z" level=info msg="StopPodSandbox for \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\"" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.298 [WARNING][4554] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a3072e8-f1d4-4a0b-a333-9167360f3eb4", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261", Pod:"csi-node-driver-q78rw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali09c0dc3e679", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.298 [INFO][4554] k8s.go 608: Cleaning up netns ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.298 [INFO][4554] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" iface="eth0" netns="" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.298 [INFO][4554] k8s.go 615: Releasing IP address(es) ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.298 [INFO][4554] utils.go 188: Calico CNI releasing IP address ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.355 [INFO][4560] ipam_plugin.go 411: Releasing address using handleID ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.355 [INFO][4560] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.356 [INFO][4560] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.367 [WARNING][4560] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.367 [INFO][4560] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.372 [INFO][4560] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:24.382033 containerd[1279]: 2024-06-25 16:29:24.374 [INFO][4554] k8s.go 621: Teardown processing complete. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.382809 containerd[1279]: time="2024-06-25T16:29:24.382109137Z" level=info msg="TearDown network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\" successfully" Jun 25 16:29:24.382809 containerd[1279]: time="2024-06-25T16:29:24.382154870Z" level=info msg="StopPodSandbox for \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\" returns successfully" Jun 25 16:29:24.384208 containerd[1279]: time="2024-06-25T16:29:24.384155283Z" level=info msg="RemovePodSandbox for \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\"" Jun 25 16:29:24.410367 containerd[1279]: time="2024-06-25T16:29:24.384203015Z" level=info msg="Forcibly stopping sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\"" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.491 [WARNING][4581] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a3072e8-f1d4-4a0b-a333-9167360f3eb4", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"58101092124e187921ffb830e15ce5cd6f1aaf6d02a831646247bd9a8c093261", Pod:"csi-node-driver-q78rw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.94.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali09c0dc3e679", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.492 [INFO][4581] k8s.go 608: Cleaning up netns ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.492 [INFO][4581] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" iface="eth0" netns="" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.492 [INFO][4581] k8s.go 615: Releasing IP address(es) ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.492 [INFO][4581] utils.go 188: Calico CNI releasing IP address ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.534 [INFO][4587] ipam_plugin.go 411: Releasing address using handleID ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.535 [INFO][4587] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.535 [INFO][4587] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.551 [WARNING][4587] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.552 [INFO][4587] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" HandleID="k8s-pod-network.a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Workload="ci--3815.2.4--a--1561673ea7-k8s-csi--node--driver--q78rw-eth0" Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.555 [INFO][4587] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:24.560249 containerd[1279]: 2024-06-25 16:29:24.557 [INFO][4581] k8s.go 621: Teardown processing complete. ContainerID="a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a" Jun 25 16:29:24.561062 containerd[1279]: time="2024-06-25T16:29:24.560273736Z" level=info msg="TearDown network for sandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\" successfully" Jun 25 16:29:24.585015 containerd[1279]: time="2024-06-25T16:29:24.584892774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:24.592599 containerd[1279]: time="2024-06-25T16:29:24.592473001Z" level=info msg="RemovePodSandbox \"a52e6cab2c3160a9166c488c2d96573b0dea4261cebe6b9fe89a163fdb507d3a\" returns successfully" Jun 25 16:29:24.593394 containerd[1279]: time="2024-06-25T16:29:24.593360570Z" level=info msg="StopPodSandbox for \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\"" Jun 25 16:29:24.618672 containerd[1279]: time="2024-06-25T16:29:24.593628643Z" level=info msg="TearDown network for sandbox \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" successfully" Jun 25 16:29:24.619024 containerd[1279]: time="2024-06-25T16:29:24.618988538Z" level=info msg="StopPodSandbox for \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" returns successfully" Jun 25 16:29:24.619606 containerd[1279]: time="2024-06-25T16:29:24.619579657Z" level=info msg="RemovePodSandbox for \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\"" Jun 25 16:29:24.619792 containerd[1279]: time="2024-06-25T16:29:24.619741035Z" level=info msg="Forcibly stopping sandbox \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\"" Jun 25 16:29:24.620104 containerd[1279]: time="2024-06-25T16:29:24.619932922Z" level=info msg="TearDown network for sandbox \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" successfully" Jun 25 16:29:24.627043 containerd[1279]: time="2024-06-25T16:29:24.626746185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:29:24.627457 containerd[1279]: time="2024-06-25T16:29:24.627400918Z" level=info msg="RemovePodSandbox \"9bd6eab87bfef84f26717a7877470b3209a21af22b4516f3abe0133d2ee1cc28\" returns successfully" Jun 25 16:29:24.628117 containerd[1279]: time="2024-06-25T16:29:24.628064793Z" level=info msg="StopPodSandbox for \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\"" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.719 [WARNING][4606] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0", GenerateName:"calico-kube-controllers-66c968684-", Namespace:"calico-system", SelfLink:"", UID:"ea16791a-e4ba-4de5-bc62-0b62890d911f", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c968684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802", Pod:"calico-kube-controllers-66c968684-ncbrr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc395c205f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.720 [INFO][4606] k8s.go 608: Cleaning up netns ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.720 [INFO][4606] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" iface="eth0" netns="" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.720 [INFO][4606] k8s.go 615: Releasing IP address(es) ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.720 [INFO][4606] utils.go 188: Calico CNI releasing IP address ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.773 [INFO][4612] ipam_plugin.go 411: Releasing address using handleID ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.774 [INFO][4612] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.774 [INFO][4612] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.787 [WARNING][4612] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.787 [INFO][4612] ipam_plugin.go 439: Releasing address using workloadID ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.791 [INFO][4612] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:24.796685 containerd[1279]: 2024-06-25 16:29:24.794 [INFO][4606] k8s.go 621: Teardown processing complete. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.797666 containerd[1279]: time="2024-06-25T16:29:24.797622494Z" level=info msg="TearDown network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\" successfully" Jun 25 16:29:24.797790 containerd[1279]: time="2024-06-25T16:29:24.797765120Z" level=info msg="StopPodSandbox for \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\" returns successfully" Jun 25 16:29:24.799336 containerd[1279]: time="2024-06-25T16:29:24.799288793Z" level=info msg="RemovePodSandbox for \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\"" Jun 25 16:29:24.799694 containerd[1279]: time="2024-06-25T16:29:24.799600818Z" level=info msg="Forcibly stopping sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\"" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.859 [WARNING][4633] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0", GenerateName:"calico-kube-controllers-66c968684-", Namespace:"calico-system", SelfLink:"", UID:"ea16791a-e4ba-4de5-bc62-0b62890d911f", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66c968684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"ba697936fddfa4e5a8177a6d3fab3d7983241c411001e7f232bd5771fb329802", Pod:"calico-kube-controllers-66c968684-ncbrr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc395c205f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.859 [INFO][4633] k8s.go 608: Cleaning up netns ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.859 [INFO][4633] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" iface="eth0" netns="" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.859 [INFO][4633] k8s.go 615: Releasing IP address(es) ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.859 [INFO][4633] utils.go 188: Calico CNI releasing IP address ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.897 [INFO][4639] ipam_plugin.go 411: Releasing address using handleID ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.897 [INFO][4639] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.897 [INFO][4639] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.906 [WARNING][4639] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.906 [INFO][4639] ipam_plugin.go 439: Releasing address using workloadID ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" HandleID="k8s-pod-network.aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--kube--controllers--66c968684--ncbrr-eth0" Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.909 [INFO][4639] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:24.916329 containerd[1279]: 2024-06-25 16:29:24.912 [INFO][4633] k8s.go 621: Teardown processing complete. ContainerID="aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5" Jun 25 16:29:24.916329 containerd[1279]: time="2024-06-25T16:29:24.916211117Z" level=info msg="TearDown network for sandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\" successfully" Jun 25 16:29:24.920136 containerd[1279]: time="2024-06-25T16:29:24.920045847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:24.920136 containerd[1279]: time="2024-06-25T16:29:24.920130651Z" level=info msg="RemovePodSandbox \"aff98b4b0ab001f0c6c9bd9afdd0b92610cc8af71a3bb22d963f4c91a84972d5\" returns successfully" Jun 25 16:29:24.920645 containerd[1279]: time="2024-06-25T16:29:24.920598909Z" level=info msg="StopPodSandbox for \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\"" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:24.974 [WARNING][4658] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7f99e445-217f-48d6-b1a5-92f613923722", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4", Pod:"coredns-5dd5756b68-jnbj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d159a9d105", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:24.975 [INFO][4658] k8s.go 608: Cleaning up netns ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:24.975 [INFO][4658] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" iface="eth0" netns="" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:24.975 [INFO][4658] k8s.go 615: Releasing IP address(es) ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:24.975 [INFO][4658] utils.go 188: Calico CNI releasing IP address ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:25.006 [INFO][4664] ipam_plugin.go 411: Releasing address using handleID ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:25.006 [INFO][4664] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:25.006 [INFO][4664] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:25.015 [WARNING][4664] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:25.015 [INFO][4664] ipam_plugin.go 439: Releasing address using workloadID ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:25.019 [INFO][4664] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:25.024143 containerd[1279]: 2024-06-25 16:29:25.021 [INFO][4658] k8s.go 621: Teardown processing complete. ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.024838 containerd[1279]: time="2024-06-25T16:29:25.024193740Z" level=info msg="TearDown network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\" successfully" Jun 25 16:29:25.024838 containerd[1279]: time="2024-06-25T16:29:25.024246166Z" level=info msg="StopPodSandbox for \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\" returns successfully" Jun 25 16:29:25.024838 containerd[1279]: time="2024-06-25T16:29:25.024820420Z" level=info msg="RemovePodSandbox for \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\"" Jun 25 16:29:25.025089 containerd[1279]: time="2024-06-25T16:29:25.024857789Z" level=info msg="Forcibly stopping sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\"" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.084 [WARNING][4682] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"7f99e445-217f-48d6-b1a5-92f613923722", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"961de25c49c027d3017360a7aee9b45fe14be9f0da3ed3aadab2d85871467ca4", Pod:"coredns-5dd5756b68-jnbj2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d159a9d105", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.084 [INFO][4682] k8s.go 608: Cleaning up netns ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.085 [INFO][4682] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" iface="eth0" netns="" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.085 [INFO][4682] k8s.go 615: Releasing IP address(es) ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.085 [INFO][4682] utils.go 188: Calico CNI releasing IP address ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.122 [INFO][4688] ipam_plugin.go 411: Releasing address using handleID ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.122 [INFO][4688] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.123 [INFO][4688] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.132 [WARNING][4688] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.132 [INFO][4688] ipam_plugin.go 439: Releasing address using workloadID ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" HandleID="k8s-pod-network.879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--jnbj2-eth0" Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.135 [INFO][4688] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:25.140758 containerd[1279]: 2024-06-25 16:29:25.138 [INFO][4682] k8s.go 621: Teardown processing complete. ContainerID="879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b" Jun 25 16:29:25.141602 containerd[1279]: time="2024-06-25T16:29:25.141554018Z" level=info msg="TearDown network for sandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\" successfully" Jun 25 16:29:25.145625 containerd[1279]: time="2024-06-25T16:29:25.145558954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:25.145863 containerd[1279]: time="2024-06-25T16:29:25.145835559Z" level=info msg="RemovePodSandbox \"879a40b563dddc0c043c2a984b0dd8c85ee98dff8399fd967ff174e7d7491b0b\" returns successfully" Jun 25 16:29:25.146614 containerd[1279]: time="2024-06-25T16:29:25.146569652Z" level=info msg="StopPodSandbox for \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\"" Jun 25 16:29:25.146779 containerd[1279]: time="2024-06-25T16:29:25.146716816Z" level=info msg="TearDown network for sandbox \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" successfully" Jun 25 16:29:25.146917 containerd[1279]: time="2024-06-25T16:29:25.146783453Z" level=info msg="StopPodSandbox for \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" returns successfully" Jun 25 16:29:25.147509 containerd[1279]: time="2024-06-25T16:29:25.147475960Z" level=info msg="RemovePodSandbox for \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\"" Jun 25 16:29:25.147913 containerd[1279]: time="2024-06-25T16:29:25.147694188Z" level=info msg="Forcibly stopping sandbox \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\"" Jun 25 16:29:25.148038 containerd[1279]: time="2024-06-25T16:29:25.148014394Z" level=info msg="TearDown network for sandbox \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" successfully" Jun 25 16:29:25.152879 containerd[1279]: time="2024-06-25T16:29:25.152827319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:29:25.153257 containerd[1279]: time="2024-06-25T16:29:25.153213460Z" level=info msg="RemovePodSandbox \"c3ac7604ddd8ea30f04397ed4b8d00e316e68592c82994636dd89d1f3b82dbbf\" returns successfully" Jun 25 16:29:25.154226 containerd[1279]: time="2024-06-25T16:29:25.154165633Z" level=info msg="StopPodSandbox for \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\"" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.206 [WARNING][4706] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c683e090-e0a1-4042-b2a1-c22a1edf7207", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498", Pod:"coredns-5dd5756b68-lhffn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292ce8e59f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.207 [INFO][4706] k8s.go 608: Cleaning up netns ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.207 [INFO][4706] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" iface="eth0" netns="" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.207 [INFO][4706] k8s.go 615: Releasing IP address(es) ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.207 [INFO][4706] utils.go 188: Calico CNI releasing IP address ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.237 [INFO][4712] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.237 [INFO][4712] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.237 [INFO][4712] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.248 [WARNING][4712] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.248 [INFO][4712] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.252 [INFO][4712] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:25.258929 containerd[1279]: 2024-06-25 16:29:25.255 [INFO][4706] k8s.go 621: Teardown processing complete. ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.258929 containerd[1279]: time="2024-06-25T16:29:25.258236350Z" level=info msg="TearDown network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\" successfully" Jun 25 16:29:25.258929 containerd[1279]: time="2024-06-25T16:29:25.258442503Z" level=info msg="StopPodSandbox for \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\" returns successfully" Jun 25 16:29:25.261884 containerd[1279]: time="2024-06-25T16:29:25.261011590Z" level=info msg="RemovePodSandbox for \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\"" Jun 25 16:29:25.261884 containerd[1279]: time="2024-06-25T16:29:25.261259607Z" level=info msg="Forcibly stopping sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\"" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.320 [WARNING][4732] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c683e090-e0a1-4042-b2a1-c22a1edf7207", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 28, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"834c0bd3b886417a94997927e5756cc0ec9bbe256164507c25dc19d249227498", Pod:"coredns-5dd5756b68-lhffn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali292ce8e59f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.321 [INFO][4732] k8s.go 608: Cleaning up netns ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.321 [INFO][4732] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" iface="eth0" netns="" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.321 [INFO][4732] k8s.go 615: Releasing IP address(es) ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.321 [INFO][4732] utils.go 188: Calico CNI releasing IP address ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.368 [INFO][4738] ipam_plugin.go 411: Releasing address using handleID ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.368 [INFO][4738] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.369 [INFO][4738] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.377 [WARNING][4738] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.377 [INFO][4738] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" HandleID="k8s-pod-network.3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Workload="ci--3815.2.4--a--1561673ea7-k8s-coredns--5dd5756b68--lhffn-eth0" Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.380 [INFO][4738] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:29:25.385214 containerd[1279]: 2024-06-25 16:29:25.382 [INFO][4732] k8s.go 621: Teardown processing complete. ContainerID="3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c" Jun 25 16:29:25.386413 containerd[1279]: time="2024-06-25T16:29:25.386168678Z" level=info msg="TearDown network for sandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\" successfully" Jun 25 16:29:25.389862 containerd[1279]: time="2024-06-25T16:29:25.389802359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:29:25.390163 containerd[1279]: time="2024-06-25T16:29:25.390128305Z" level=info msg="RemovePodSandbox \"3d00bc1c9bea81acd45e35c4636c1d38ba1f254eef308869715ac355ad00027c\" returns successfully" Jun 25 16:29:26.943539 systemd[1]: Started sshd@10-164.92.91.188:22-139.178.89.65:55624.service - OpenSSH per-connection server daemon (139.178.89.65:55624). Jun 25 16:29:26.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-164.92.91.188:22-139.178.89.65:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:26.947364 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:26.947522 kernel: audit: type=1130 audit(1719332966.943:674): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-164.92.91.188:22-139.178.89.65:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:27.031000 audit[4750]: USER_ACCT pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.032973 sshd[4750]: Accepted publickey for core from 139.178.89.65 port 55624 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:27.035258 kernel: audit: type=1101 audit(1719332967.031:675): pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.034000 audit[4750]: CRED_ACQ pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.039397 kernel: audit: type=1103 audit(1719332967.034:676): pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.039553 kernel: audit: type=1006 audit(1719332967.035:677): pid=4750 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:29:27.039901 sshd[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:27.035000 audit[4750]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7adc1e20 a2=3 a3=7f364bf38480 items=0 ppid=1 pid=4750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:27.042739 kernel: audit: type=1300 audit(1719332967.035:677): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7adc1e20 a2=3 a3=7f364bf38480 items=0 ppid=1 pid=4750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:27.047855 kernel: audit: type=1327 audit(1719332967.035:677): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:27.035000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:27.054271 systemd-logind[1272]: New session 11 of user core. Jun 25 16:29:27.060241 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 16:29:27.066000 audit[4750]: USER_START pid=4750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.072124 kernel: audit: type=1105 audit(1719332967.066:678): pid=4750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.072000 audit[4752]: CRED_ACQ pid=4752 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.078606 kernel: audit: type=1103 audit(1719332967.072:679): pid=4752 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.274739 sshd[4750]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:27.278000 audit[4750]: USER_END pid=4750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.288319 kernel: audit: type=1106 audit(1719332967.278:680): pid=4750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.288470 kernel: audit: type=1104 audit(1719332967.278:681): pid=4750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.278000 audit[4750]: CRED_DISP pid=4750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.286614 systemd[1]: sshd@10-164.92.91.188:22-139.178.89.65:55624.service: Deactivated successfully. Jun 25 16:29:27.289925 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:29:27.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-164.92.91.188:22-139.178.89.65:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:27.293704 systemd-logind[1272]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:29:27.301675 systemd[1]: Started sshd@11-164.92.91.188:22-139.178.89.65:55626.service - OpenSSH per-connection server daemon (139.178.89.65:55626). Jun 25 16:29:27.304246 systemd-logind[1272]: Removed session 11. 
Jun 25 16:29:27.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-164.92.91.188:22-139.178.89.65:55626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:27.343000 audit[4763]: USER_ACCT pid=4763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.347358 sshd[4763]: Accepted publickey for core from 139.178.89.65 port 55626 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:27.348000 audit[4763]: CRED_ACQ pid=4763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.348000 audit[4763]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5d6745e0 a2=3 a3=7f003c97a480 items=0 ppid=1 pid=4763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:27.348000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:27.349538 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:27.356449 systemd-logind[1272]: New session 12 of user core. Jun 25 16:29:27.360233 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 16:29:27.367000 audit[4763]: USER_START pid=4763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.369000 audit[4765]: CRED_ACQ pid=4765 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.925772 sshd[4763]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:27.940000 audit[4763]: USER_END pid=4763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.940000 audit[4763]: CRED_DISP pid=4763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:27.954782 systemd[1]: Started sshd@12-164.92.91.188:22-139.178.89.65:55634.service - OpenSSH per-connection server daemon (139.178.89.65:55634). Jun 25 16:29:27.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-164.92.91.188:22-139.178.89.65:55634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:27.958915 systemd[1]: sshd@11-164.92.91.188:22-139.178.89.65:55626.service: Deactivated successfully. 
Jun 25 16:29:27.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-164.92.91.188:22-139.178.89.65:55626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:27.960402 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:29:27.964695 systemd-logind[1272]: Session 12 logged out. Waiting for processes to exit. Jun 25 16:29:27.966845 systemd-logind[1272]: Removed session 12. Jun 25 16:29:28.029000 audit[4780]: USER_ACCT pid=4780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:28.032111 sshd[4780]: Accepted publickey for core from 139.178.89.65 port 55634 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:28.031000 audit[4780]: CRED_ACQ pid=4780 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:28.032000 audit[4780]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb7d36b20 a2=3 a3=7f6a5020c480 items=0 ppid=1 pid=4780 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:28.032000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:28.034438 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:28.041019 systemd-logind[1272]: New session 13 of user core. Jun 25 16:29:28.044239 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:29:28.050000 audit[4780]: USER_START pid=4780 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:28.052000 audit[4783]: CRED_ACQ pid=4783 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:28.242312 sshd[4780]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:28.244000 audit[4780]: USER_END pid=4780 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:28.244000 audit[4780]: CRED_DISP pid=4780 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:28.247978 systemd-logind[1272]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:29:28.250076 systemd[1]: sshd@12-164.92.91.188:22-139.178.89.65:55634.service: Deactivated successfully. Jun 25 16:29:28.250991 systemd[1]: session-13.scope: Deactivated successfully. 
Jun 25 16:29:28.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-164.92.91.188:22-139.178.89.65:55634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:28.252860 systemd-logind[1272]: Removed session 13. Jun 25 16:29:28.557851 systemd[1]: run-containerd-runc-k8s.io-b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c-runc.BofXpy.mount: Deactivated successfully. Jun 25 16:29:33.212171 kubelet[2237]: E0625 16:29:33.212108 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:33.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.92.91.188:22-139.178.89.65:55636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:33.264732 systemd[1]: Started sshd@13-164.92.91.188:22-139.178.89.65:55636.service - OpenSSH per-connection server daemon (139.178.89.65:55636). Jun 25 16:29:33.267142 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:29:33.267289 kernel: audit: type=1130 audit(1719332973.264:701): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.92.91.188:22-139.178.89.65:55636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:33.308000 audit[4816]: USER_ACCT pid=4816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.309784 sshd[4816]: Accepted publickey for core from 139.178.89.65 port 55636 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:33.313992 kernel: audit: type=1101 audit(1719332973.308:702): pid=4816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.314000 audit[4816]: CRED_ACQ pid=4816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.316359 sshd[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:33.320101 kernel: audit: type=1103 audit(1719332973.314:703): pid=4816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.320852 kernel: audit: type=1006 audit(1719332973.314:704): pid=4816 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 16:29:33.314000 audit[4816]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc771b4290 a2=3 a3=7f0e6ed70480 items=0 ppid=1 pid=4816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Jun 25 16:29:33.327412 kernel: audit: type=1300 audit(1719332973.314:704): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc771b4290 a2=3 a3=7f0e6ed70480 items=0 ppid=1 pid=4816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:33.327570 kernel: audit: type=1327 audit(1719332973.314:704): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:33.314000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:33.334084 systemd-logind[1272]: New session 14 of user core. Jun 25 16:29:33.336242 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:29:33.343000 audit[4816]: USER_START pid=4816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.350066 kernel: audit: type=1105 audit(1719332973.343:705): pid=4816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.346000 audit[4818]: CRED_ACQ pid=4818 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.355983 kernel: audit: type=1103 audit(1719332973.346:706): pid=4818 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.505006 sshd[4816]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:33.511000 audit[4816]: USER_END pid=4816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.515000 audit[4816]: CRED_DISP pid=4816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.520530 kernel: audit: type=1106 audit(1719332973.511:707): pid=4816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.520752 kernel: audit: type=1104 audit(1719332973.515:708): pid=4816 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:33.521932 systemd[1]: sshd@13-164.92.91.188:22-139.178.89.65:55636.service: Deactivated successfully. 
Jun 25 16:29:33.523198 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:29:33.524928 systemd-logind[1272]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:29:33.526613 systemd-logind[1272]: Removed session 14. Jun 25 16:29:33.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-164.92.91.188:22-139.178.89.65:55636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:34.203495 kubelet[2237]: E0625 16:29:34.203439 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:36.264000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:36.264000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0028b2a60 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:29:36.264000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:36.264000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:36.264000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0028b2a80 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:29:36.264000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:36.270000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:36.270000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00277ae20 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:29:36.270000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:36.271000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:29:36.271000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00277afc0 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:29:36.271000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:29:38.536077 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:29:38.536208 kernel: audit: type=1130 audit(1719332978.530:714): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.92.91.188:22-139.178.89.65:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:38.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.92.91.188:22-139.178.89.65:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:38.530899 systemd[1]: Started sshd@14-164.92.91.188:22-139.178.89.65:47544.service - OpenSSH per-connection server daemon (139.178.89.65:47544). 
Jun 25 16:29:38.585000 audit[4841]: USER_ACCT pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.587335 sshd[4841]: Accepted publickey for core from 139.178.89.65 port 47544 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:38.589691 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:38.591991 kernel: audit: type=1101 audit(1719332978.585:715): pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.588000 audit[4841]: CRED_ACQ pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.599080 kernel: audit: type=1103 audit(1719332978.588:716): pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.603162 kernel: audit: type=1006 audit(1719332978.588:717): pid=4841 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:29:38.588000 audit[4841]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcda338eb0 a2=3 a3=7f84db1ae480 items=0 ppid=1 pid=4841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:38.609143 kernel: audit: type=1300 audit(1719332978.588:717): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcda338eb0 a2=3 a3=7f84db1ae480 items=0 ppid=1 pid=4841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:38.588000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:38.615145 kernel: audit: type=1327 audit(1719332978.588:717): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:38.617314 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:29:38.618396 systemd-logind[1272]: New session 15 of user core. 
Jun 25 16:29:38.635816 kernel: audit: type=1105 audit(1719332978.626:718): pid=4841 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.636013 kernel: audit: type=1103 audit(1719332978.633:719): pid=4843 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.626000 audit[4841]: USER_START pid=4841 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.633000 audit[4843]: CRED_ACQ pid=4843 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.805077 sshd[4841]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:38.806000 audit[4841]: USER_END pid=4841 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.811986 kernel: audit: type=1106 audit(1719332978.806:720): pid=4841 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.806000 audit[4841]: CRED_DISP pid=4841 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.818493 kernel: audit: type=1104 audit(1719332978.806:721): pid=4841 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:38.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-164.92.91.188:22-139.178.89.65:47544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:38.816493 systemd[1]: sshd@14-164.92.91.188:22-139.178.89.65:47544.service: Deactivated successfully. Jun 25 16:29:38.817522 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:29:38.820037 systemd-logind[1272]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:29:38.821828 systemd-logind[1272]: Removed session 15. 
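Each SSH connection in this stretch of the log follows the same PAM/audit sequence: USER_ACCT and CRED_ACQ on connect, USER_START when the session scope starts, then USER_END, CRED_DISP and SERVICE_STOP on disconnect, bracketed by systemd-logind's "New session N" and "Removed session N" lines. A small, hypothetical helper that pairs those logind lines to measure how long each session stayed open (the year is assumed to be 2024, consistent with the audit epoch stamps; journal lines carry no year of their own):

import re
from datetime import datetime

LOGIND = re.compile(
    r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: "
    r"(?P<event>New|Removed) session (?P<sid>\d+)"
)

def session_durations(lines):
    # Pair "New session"/"Removed session" events by session id and yield the
    # elapsed seconds between them.
    opened = {}
    for line in lines:
        for m in LOGIND.finditer(line):
            ts = datetime.strptime("2024 " + m["ts"], "%Y %b %d %H:%M:%S.%f")
            if m["event"] == "New":
                opened[m["sid"]] = ts
            elif m["sid"] in opened:
                yield m["sid"], (ts - opened.pop(m["sid"])).total_seconds()

sample = [
    "Jun 25 16:29:38.618396 systemd-logind[1272]: New session 15 of user core.",
    "Jun 25 16:29:38.821828 systemd-logind[1272]: Removed session 15.",
]
for sid, seconds in session_durations(sample):
    print(f"session {sid} lasted {seconds:.3f}s")   # ~0.203s for session 15 above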
Jun 25 16:29:41.204156 kubelet[2237]: E0625 16:29:41.204113 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:43.823698 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:43.823871 kernel: audit: type=1130 audit(1719332983.819:723): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.92.91.188:22-139.178.89.65:47556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:43.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.92.91.188:22-139.178.89.65:47556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:43.820597 systemd[1]: Started sshd@15-164.92.91.188:22-139.178.89.65:47556.service - OpenSSH per-connection server daemon (139.178.89.65:47556). Jun 25 16:29:43.863000 audit[4857]: USER_ACCT pid=4857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:43.864457 sshd[4857]: Accepted publickey for core from 139.178.89.65 port 47556 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:43.869980 kernel: audit: type=1101 audit(1719332983.863:724): pid=4857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:43.869000 audit[4857]: CRED_ACQ pid=4857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:43.871511 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:43.877082 kernel: audit: type=1103 audit(1719332983.869:725): pid=4857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:43.877248 kernel: audit: type=1006 audit(1719332983.870:726): pid=4857 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:29:43.870000 audit[4857]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf59d92c0 a2=3 a3=7f84f5418480 items=0 ppid=1 pid=4857 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:43.870000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:43.888629 kernel: audit: type=1300 audit(1719332983.870:726): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf59d92c0 a2=3 a3=7f84f5418480 items=0 ppid=1 pid=4857 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:43.888701 kernel: audit: type=1327 
audit(1719332983.870:726): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:43.891174 systemd-logind[1272]: New session 16 of user core. Jun 25 16:29:43.893269 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 16:29:43.901000 audit[4857]: USER_START pid=4857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:43.907233 kernel: audit: type=1105 audit(1719332983.901:727): pid=4857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:43.907397 kernel: audit: type=1103 audit(1719332983.904:728): pid=4859 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:43.904000 audit[4859]: CRED_ACQ pid=4859 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:44.060195 sshd[4857]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:44.060000 audit[4857]: USER_END pid=4857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:44.068041 kernel: audit: type=1106 audit(1719332984.060:729): pid=4857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:44.060000 audit[4857]: CRED_DISP pid=4857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:44.070351 systemd[1]: sshd@15-164.92.91.188:22-139.178.89.65:47556.service: Deactivated successfully. Jun 25 16:29:44.071558 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:29:44.074029 kernel: audit: type=1104 audit(1719332984.060:730): pid=4857 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:44.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-164.92.91.188:22-139.178.89.65:47556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:44.074711 systemd-logind[1272]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:29:44.076720 systemd-logind[1272]: Removed session 16. 
Jun 25 16:29:45.630689 kubelet[2237]: E0625 16:29:45.630654 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:29:45.660879 kubelet[2237]: I0625 16:29:45.660794 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-q78rw" podStartSLOduration=52.189331799 podCreationTimestamp="2024-06-25 16:28:46 +0000 UTC" firstStartedPulling="2024-06-25 16:29:11.101881106 +0000 UTC m=+47.143192731" lastFinishedPulling="2024-06-25 16:29:18.568724914 +0000 UTC m=+54.610036541" observedRunningTime="2024-06-25 16:29:19.751450976 +0000 UTC m=+55.792762629" watchObservedRunningTime="2024-06-25 16:29:45.656175609 +0000 UTC m=+81.697487249" Jun 25 16:29:49.085384 systemd[1]: Started sshd@16-164.92.91.188:22-139.178.89.65:54732.service - OpenSSH per-connection server daemon (139.178.89.65:54732). Jun 25 16:29:49.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-164.92.91.188:22-139.178.89.65:54732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:49.087495 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:29:49.087656 kernel: audit: type=1130 audit(1719332989.085:732): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-164.92.91.188:22-139.178.89.65:54732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:49.151000 audit[4894]: USER_ACCT pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.153211 sshd[4894]: Accepted publickey for core from 139.178.89.65 port 54732 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:49.159204 kernel: audit: type=1101 audit(1719332989.151:733): pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.162000 audit[4894]: CRED_ACQ pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.164558 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:49.169096 kernel: audit: type=1103 audit(1719332989.162:734): pid=4894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.169259 kernel: audit: type=1006 audit(1719332989.163:735): pid=4894 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:29:49.174706 kernel: audit: type=1300 audit(1719332989.163:735): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7819dbe0 a2=3 a3=7f7fc71c5480 items=0 ppid=1 pid=4894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:49.163000 audit[4894]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7819dbe0 a2=3 a3=7f7fc71c5480 items=0 ppid=1 pid=4894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:49.176119 systemd-logind[1272]: New session 17 of user core. Jun 25 16:29:49.185221 kernel: audit: type=1327 audit(1719332989.163:735): proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:49.163000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:49.183401 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 16:29:49.194000 audit[4894]: USER_START pid=4894 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.202021 kernel: audit: type=1105 audit(1719332989.194:736): pid=4894 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.205000 audit[4896]: CRED_ACQ pid=4896 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.215134 kernel: audit: type=1103 audit(1719332989.205:737): pid=4896 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.475773 sshd[4894]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:49.479000 audit[4894]: USER_END pid=4894 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.486089 kernel: audit: type=1106 audit(1719332989.479:738): pid=4894 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.479000 audit[4894]: CRED_DISP pid=4894 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.492065 systemd[1]: sshd@16-164.92.91.188:22-139.178.89.65:54732.service: Deactivated successfully. 
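The kubelet pod_startup_latency_tracker record a few lines above (pod calico-system/csi-node-driver-q78rw) can be sanity-checked from its own timestamps: treating the reported podStartSLOduration as the time from pod creation to the watch-observed running time minus the image-pull window reproduces the logged figure to within clock rounding. That formula is an assumption inferred from the numbers, not something the record itself states:

# Figures copied from the kubelet record above; the formula below is the
# assumption being illustrated, not part of the log.
created_at       = 46.000000000    # 16:28:46, pod creation, seconds after 16:28:00
watch_running_at = 105.656175609   # 16:29:45.656175609, watchObservedRunningTime
pull_started     = 47.143192731    # firstStartedPulling, kubelet monotonic offset (m=+...)
pull_finished    = 54.610036541    # lastFinishedPulling, kubelet monotonic offset (m=+...)

image_pull = pull_finished - pull_started
slo        = (watch_running_at - created_at) - image_pull
print(f"image pull window: {image_pull:.9f}s")   # ~7.466843810 s
print(f"startup SLO time:  {slo:.9f}s")          # ~52.189331799 s, matching the record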
Jun 25 16:29:49.493987 kernel: audit: type=1104 audit(1719332989.479:739): pid=4894 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.493964 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:29:49.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-164.92.91.188:22-139.178.89.65:54732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:49.498950 systemd-logind[1272]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:29:49.506898 systemd[1]: Started sshd@17-164.92.91.188:22-139.178.89.65:54748.service - OpenSSH per-connection server daemon (139.178.89.65:54748). Jun 25 16:29:49.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-164.92.91.188:22-139.178.89.65:54748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:49.511822 systemd-logind[1272]: Removed session 17. Jun 25 16:29:49.585307 sshd[4906]: Accepted publickey for core from 139.178.89.65 port 54748 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:49.584000 audit[4906]: USER_ACCT pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.586000 audit[4906]: CRED_ACQ pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.586000 audit[4906]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff58267f50 a2=3 a3=7fb79e6c5480 items=0 ppid=1 pid=4906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:49.586000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:49.588583 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:49.598808 systemd-logind[1272]: New session 18 of user core. Jun 25 16:29:49.604398 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 16:29:49.616000 audit[4906]: USER_START pid=4906 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:49.619000 audit[4908]: CRED_ACQ pid=4908 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:50.162756 sshd[4906]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:50.168000 audit[4906]: USER_END pid=4906 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:50.174000 audit[4906]: CRED_DISP pid=4906 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:50.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-164.92.91.188:22-139.178.89.65:54750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:50.186967 systemd[1]: Started sshd@18-164.92.91.188:22-139.178.89.65:54750.service - OpenSSH per-connection server daemon (139.178.89.65:54750). Jun 25 16:29:50.190624 systemd[1]: sshd@17-164.92.91.188:22-139.178.89.65:54748.service: Deactivated successfully. Jun 25 16:29:50.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-164.92.91.188:22-139.178.89.65:54748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:50.192548 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:29:50.198849 systemd-logind[1272]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:29:50.200827 systemd-logind[1272]: Removed session 18. 
Jun 25 16:29:50.293000 audit[4916]: USER_ACCT pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:50.294701 sshd[4916]: Accepted publickey for core from 139.178.89.65 port 54750 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:50.296000 audit[4916]: CRED_ACQ pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:50.296000 audit[4916]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe41bd8ec0 a2=3 a3=7ff814a43480 items=0 ppid=1 pid=4916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:50.296000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:50.306462 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:50.328983 systemd-logind[1272]: New session 19 of user core. Jun 25 16:29:50.335982 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 16:29:50.349000 audit[4916]: USER_START pid=4916 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:50.353000 audit[4919]: CRED_ACQ pid=4919 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:51.976003 sshd[4916]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:51.978000 audit[4916]: USER_END pid=4916 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:51.979000 audit[4916]: CRED_DISP pid=4916 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:51.989314 systemd[1]: sshd@18-164.92.91.188:22-139.178.89.65:54750.service: Deactivated successfully. Jun 25 16:29:51.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-164.92.91.188:22-139.178.89.65:54750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:51.990882 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:29:51.992183 systemd-logind[1272]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:29:52.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-164.92.91.188:22-139.178.89.65:54752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:52.002362 systemd[1]: Started sshd@19-164.92.91.188:22-139.178.89.65:54752.service - OpenSSH per-connection server daemon (139.178.89.65:54752). Jun 25 16:29:52.006321 systemd-logind[1272]: Removed session 19. Jun 25 16:29:52.016000 audit[4929]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:52.016000 audit[4929]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe84204d80 a2=0 a3=7ffe84204d6c items=0 ppid=2413 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:52.018000 audit[4929]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:52.018000 audit[4929]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe84204d80 a2=0 a3=0 items=0 ppid=2413 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:52.070000 audit[4932]: USER_ACCT pid=4932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:52.071408 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 54752 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:52.072000 audit[4932]: CRED_ACQ pid=4932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:52.073000 audit[4932]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9ba9aa00 a2=3 a3=7f8a91261480 items=0 ppid=1 pid=4932 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.073000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:52.074929 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:52.083258 systemd-logind[1272]: New session 20 of user core. Jun 25 16:29:52.088299 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 16:29:52.096000 audit[4932]: USER_START pid=4932 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:52.099000 audit[4935]: CRED_ACQ pid=4935 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:52.114000 audit[4936]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:52.114000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffab6efc20 a2=0 a3=7fffab6efc0c items=0 ppid=2413 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:52.116000 audit[4936]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:52.116000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffab6efc20 a2=0 a3=0 items=0 ppid=2413 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:52.116000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:53.048293 sshd[4932]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:53.049000 audit[4932]: USER_END pid=4932 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.049000 audit[4932]: CRED_DISP pid=4932 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-164.92.91.188:22-139.178.89.65:54752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:53.057702 systemd[1]: sshd@19-164.92.91.188:22-139.178.89.65:54752.service: Deactivated successfully. Jun 25 16:29:53.059112 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:29:53.062005 systemd-logind[1272]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:29:53.069798 systemd[1]: Started sshd@20-164.92.91.188:22-139.178.89.65:54756.service - OpenSSH per-connection server daemon (139.178.89.65:54756). 
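The NETFILTER_CFG records above come from iptables-restore runs (their hex proctitle decodes to "iptables-restore -w 5 -W 100000 --noflush --counters"), and each record notes how many entries one run registered in a given table. A rough sketch for pulling those counts out of a journal dump like this one; the field layout follows the records as printed here:

import re

CFG = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=\d+ "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

def tally(lines):
    # Sum the "entries" counts per (table, operation) pair.
    counts = {}
    for line in lines:
        for m in CFG.finditer(line):
            key = (m["table"], m["op"])
            counts[key] = counts.get(key, 0) + int(m["entries"])
    return counts

sample = "audit[4936]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule"
print(tally([sample]))   # {('filter', 'nft_register_rule'): 32}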
Jun 25 16:29:53.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.92.91.188:22-139.178.89.65:54756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:53.076034 systemd-logind[1272]: Removed session 20. Jun 25 16:29:53.149000 audit[4944]: USER_ACCT pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.150423 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 54756 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:53.166000 audit[4944]: CRED_ACQ pid=4944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.166000 audit[4944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbe50e420 a2=3 a3=7fa7d2117480 items=0 ppid=1 pid=4944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:53.166000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:53.182439 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:53.194074 systemd-logind[1272]: New session 21 of user core. Jun 25 16:29:53.203705 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:29:53.213000 audit[4944]: USER_START pid=4944 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.216000 audit[4946]: CRED_ACQ pid=4946 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.427552 sshd[4944]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:53.428000 audit[4944]: USER_END pid=4944 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.429000 audit[4944]: CRED_DISP pid=4944 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:53.433162 systemd[1]: sshd@20-164.92.91.188:22-139.178.89.65:54756.service: Deactivated successfully. Jun 25 16:29:53.434185 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:29:53.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-164.92.91.188:22-139.178.89.65:54756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:29:53.435539 systemd-logind[1272]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:29:53.437124 systemd-logind[1272]: Removed session 21. Jun 25 16:29:58.454985 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:29:58.455280 kernel: audit: type=1130 audit(1719332998.452:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.92.91.188:22-139.178.89.65:33972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:58.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.92.91.188:22-139.178.89.65:33972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:58.452766 systemd[1]: Started sshd@21-164.92.91.188:22-139.178.89.65:33972.service - OpenSSH per-connection server daemon (139.178.89.65:33972). Jun 25 16:29:58.504000 audit[4968]: USER_ACCT pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.505954 sshd[4968]: Accepted publickey for core from 139.178.89.65 port 33972 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:29:58.511195 kernel: audit: type=1101 audit(1719332998.504:782): pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.513956 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:29:58.512000 audit[4968]: CRED_ACQ pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.519268 kernel: audit: type=1103 audit(1719332998.512:783): pid=4968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.519926 kernel: audit: type=1006 audit(1719332998.512:784): pid=4968 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 16:29:58.512000 audit[4968]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf8135000 a2=3 a3=7f335584e480 items=0 ppid=1 pid=4968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:58.528046 kernel: audit: type=1300 audit(1719332998.512:784): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf8135000 a2=3 a3=7f335584e480 items=0 ppid=1 pid=4968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:58.512000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:29:58.531153 kernel: audit: type=1327 audit(1719332998.512:784): proctitle=737368643A20636F7265205B707269765D Jun 25 
16:29:58.539172 systemd-logind[1272]: New session 22 of user core. Jun 25 16:29:58.548321 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 16:29:58.566251 kernel: audit: type=1105 audit(1719332998.557:785): pid=4968 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.557000 audit[4968]: USER_START pid=4968 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.573492 kernel: audit: type=1103 audit(1719332998.566:786): pid=4975 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.566000 audit[4975]: CRED_ACQ pid=4975 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.591100 systemd[1]: run-containerd-runc-k8s.io-b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c-runc.2TWzdE.mount: Deactivated successfully. Jun 25 16:29:58.716000 audit[4996]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:58.721062 kernel: audit: type=1325 audit(1719332998.716:787): table=filter:117 family=2 entries=20 op=nft_register_rule pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:58.716000 audit[4996]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcc23241d0 a2=0 a3=7ffcc23241bc items=0 ppid=2413 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:58.729247 kernel: audit: type=1300 audit(1719332998.716:787): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcc23241d0 a2=0 a3=7ffcc23241bc items=0 ppid=2413 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:58.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:58.737000 audit[4996]: NETFILTER_CFG table=nat:118 family=2 entries=104 op=nft_register_chain pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:29:58.737000 audit[4996]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcc23241d0 a2=0 a3=7ffcc23241bc items=0 ppid=2413 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:29:58.737000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:29:58.849866 sshd[4968]: pam_unix(sshd:session): session closed for user core Jun 25 16:29:58.854000 audit[4968]: USER_END pid=4968 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.854000 audit[4968]: CRED_DISP pid=4968 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:29:58.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-164.92.91.188:22-139.178.89.65:33972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:29:58.860501 systemd[1]: sshd@21-164.92.91.188:22-139.178.89.65:33972.service: Deactivated successfully. Jun 25 16:29:58.861766 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:29:58.864506 systemd-logind[1272]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:29:58.866642 systemd-logind[1272]: Removed session 22. Jun 25 16:30:01.223721 kubelet[2237]: I0625 16:30:01.223658 2237 topology_manager.go:215] "Topology Admit Handler" podUID="493a3ac2-a5d6-4efc-90c7-0e82b9a144b7" podNamespace="calico-apiserver" podName="calico-apiserver-586457776b-ftfcf" Jun 25 16:30:01.257174 systemd[1]: Created slice kubepods-besteffort-pod493a3ac2_a5d6_4efc_90c7_0e82b9a144b7.slice - libcontainer container kubepods-besteffort-pod493a3ac2_a5d6_4efc_90c7_0e82b9a144b7.slice. 
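The admit-handler line and the slice created right after it show how the pod UID maps onto the systemd cgroup slice name: a QoS-class prefix ("kubepods-besteffort-pod") plus the UID with its dashes rewritten as underscores. A sketch reproducing the slice name from the UID printed above; the general rule is inferred from this example and the systemd cgroup driver convention:

def besteffort_pod_slice(pod_uid: str) -> str:
    # Dashes in the UID become underscores so the result is a valid unit name.
    return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

print(besteffort_pod_slice("493a3ac2-a5d6-4efc-90c7-0e82b9a144b7"))
# -> kubepods-besteffort-pod493a3ac2_a5d6_4efc_90c7_0e82b9a144b7.slice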
Jun 25 16:30:01.258000 audit[5000]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:01.258000 audit[5000]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdc7b01250 a2=0 a3=7ffdc7b0123c items=0 ppid=2413 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:01.258000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:01.263000 audit[5000]: NETFILTER_CFG table=nat:120 family=2 entries=44 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:01.263000 audit[5000]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffdc7b01250 a2=0 a3=7ffdc7b0123c items=0 ppid=2413 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:01.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:01.309000 audit[5002]: NETFILTER_CFG table=filter:121 family=2 entries=10 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:01.309000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe6926db20 a2=0 a3=7ffe6926db0c items=0 ppid=2413 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:01.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:01.326000 audit[5002]: NETFILTER_CFG table=nat:122 family=2 entries=44 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:01.326000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffe6926db20 a2=0 a3=7ffe6926db0c items=0 ppid=2413 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:01.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:01.367763 kubelet[2237]: I0625 16:30:01.367475 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/493a3ac2-a5d6-4efc-90c7-0e82b9a144b7-calico-apiserver-certs\") pod \"calico-apiserver-586457776b-ftfcf\" (UID: \"493a3ac2-a5d6-4efc-90c7-0e82b9a144b7\") " pod="calico-apiserver/calico-apiserver-586457776b-ftfcf" Jun 25 16:30:01.367763 kubelet[2237]: I0625 16:30:01.367565 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zx28\" (UniqueName: \"kubernetes.io/projected/493a3ac2-a5d6-4efc-90c7-0e82b9a144b7-kube-api-access-6zx28\") pod \"calico-apiserver-586457776b-ftfcf\" (UID: 
\"493a3ac2-a5d6-4efc-90c7-0e82b9a144b7\") " pod="calico-apiserver/calico-apiserver-586457776b-ftfcf" Jun 25 16:30:01.476840 kubelet[2237]: E0625 16:30:01.476668 2237 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 16:30:01.489323 kubelet[2237]: E0625 16:30:01.489261 2237 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/493a3ac2-a5d6-4efc-90c7-0e82b9a144b7-calico-apiserver-certs podName:493a3ac2-a5d6-4efc-90c7-0e82b9a144b7 nodeName:}" failed. No retries permitted until 2024-06-25 16:30:01.977259457 +0000 UTC m=+98.018571104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/493a3ac2-a5d6-4efc-90c7-0e82b9a144b7-calico-apiserver-certs") pod "calico-apiserver-586457776b-ftfcf" (UID: "493a3ac2-a5d6-4efc-90c7-0e82b9a144b7") : secret "calico-apiserver-certs" not found Jun 25 16:30:02.169210 containerd[1279]: time="2024-06-25T16:30:02.168488798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586457776b-ftfcf,Uid:493a3ac2-a5d6-4efc-90c7-0e82b9a144b7,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:30:02.688898 systemd-networkd[1093]: cali4f3c4337f3d: Link UP Jun 25 16:30:02.698741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:30:02.699008 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4f3c4337f3d: link becomes ready Jun 25 16:30:02.699379 systemd-networkd[1093]: cali4f3c4337f3d: Gained carrier Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.377 [INFO][5009] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0 calico-apiserver-586457776b- calico-apiserver 493a3ac2-a5d6-4efc-90c7-0e82b9a144b7 1246 0 2024-06-25 16:30:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:586457776b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-1561673ea7 calico-apiserver-586457776b-ftfcf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f3c4337f3d [] []}} ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.377 [INFO][5009] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.452 [INFO][5016] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" HandleID="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.499 [INFO][5016] ipam_plugin.go 264: Auto assigning IP ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" 
HandleID="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003591e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-1561673ea7", "pod":"calico-apiserver-586457776b-ftfcf", "timestamp":"2024-06-25 16:30:02.452710619 +0000 UTC"}, Hostname:"ci-3815.2.4-a-1561673ea7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.499 [INFO][5016] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.499 [INFO][5016] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.499 [INFO][5016] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-1561673ea7' Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.520 [INFO][5016] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.533 [INFO][5016] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.569 [INFO][5016] ipam.go 489: Trying affinity for 192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.591 [INFO][5016] ipam.go 155: Attempting to load block cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.620 [INFO][5016] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.94.0/26 host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.625 [INFO][5016] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.94.0/26 handle="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.635 [INFO][5016] ipam.go 1685: Creating new handle: k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.647 [INFO][5016] ipam.go 1203: Writing block in order to claim IPs block=192.168.94.0/26 handle="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.671 [INFO][5016] ipam.go 1216: Successfully claimed IPs: [192.168.94.5/26] block=192.168.94.0/26 handle="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.671 [INFO][5016] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.94.5/26] handle="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" host="ci-3815.2.4-a-1561673ea7" Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.671 [INFO][5016] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:30:02.747889 containerd[1279]: 2024-06-25 16:30:02.671 [INFO][5016] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.94.5/26] IPv6=[] ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" HandleID="k8s-pod-network.cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Workload="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" Jun 25 16:30:02.748965 containerd[1279]: 2024-06-25 16:30:02.675 [INFO][5009] k8s.go 386: Populated endpoint ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0", GenerateName:"calico-apiserver-586457776b-", Namespace:"calico-apiserver", SelfLink:"", UID:"493a3ac2-a5d6-4efc-90c7-0e82b9a144b7", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586457776b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"", Pod:"calico-apiserver-586457776b-ftfcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3c4337f3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:02.748965 containerd[1279]: 2024-06-25 16:30:02.675 [INFO][5009] k8s.go 387: Calico CNI using IPs: [192.168.94.5/32] ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" Jun 25 16:30:02.748965 containerd[1279]: 2024-06-25 16:30:02.675 [INFO][5009] dataplane_linux.go 68: Setting the host side veth name to cali4f3c4337f3d ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" Jun 25 16:30:02.748965 containerd[1279]: 2024-06-25 16:30:02.704 [INFO][5009] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" Jun 25 16:30:02.748965 containerd[1279]: 2024-06-25 16:30:02.705 [INFO][5009] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0", GenerateName:"calico-apiserver-586457776b-", Namespace:"calico-apiserver", SelfLink:"", UID:"493a3ac2-a5d6-4efc-90c7-0e82b9a144b7", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 30, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586457776b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-1561673ea7", ContainerID:"cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca", Pod:"calico-apiserver-586457776b-ftfcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3c4337f3d", MAC:"6e:52:1b:1c:65:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:30:02.748965 containerd[1279]: 2024-06-25 16:30:02.740 [INFO][5009] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca" Namespace="calico-apiserver" Pod="calico-apiserver-586457776b-ftfcf" WorkloadEndpoint="ci--3815.2.4--a--1561673ea7-k8s-calico--apiserver--586457776b--ftfcf-eth0" Jun 25 16:30:02.800522 containerd[1279]: time="2024-06-25T16:30:02.800378415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:30:02.800760 containerd[1279]: time="2024-06-25T16:30:02.800544644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:02.800760 containerd[1279]: time="2024-06-25T16:30:02.800695974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:30:02.800872 containerd[1279]: time="2024-06-25T16:30:02.800779398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:30:02.841286 systemd[1]: Started cri-containerd-cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca.scope - libcontainer container cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca. Jun 25 16:30:02.881822 systemd[1]: run-containerd-runc-k8s.io-cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca-runc.P2Oc8f.mount: Deactivated successfully. 
Jun 25 16:30:02.911000 audit[5066]: NETFILTER_CFG table=filter:123 family=2 entries=55 op=nft_register_chain pid=5066 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:30:02.911000 audit[5066]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffe02600870 a2=0 a3=7ffe0260085c items=0 ppid=4193 pid=5066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:02.911000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:30:02.911000 audit: BPF prog-id=191 op=LOAD Jun 25 16:30:02.912000 audit: BPF prog-id=192 op=LOAD Jun 25 16:30:02.912000 audit[5055]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:02.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366353637663564336565363830383962353666666166306134346635 Jun 25 16:30:02.913000 audit: BPF prog-id=193 op=LOAD Jun 25 16:30:02.913000 audit[5055]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:02.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366353637663564336565363830383962353666666166306134346635 Jun 25 16:30:02.913000 audit: BPF prog-id=193 op=UNLOAD Jun 25 16:30:02.913000 audit: BPF prog-id=192 op=UNLOAD Jun 25 16:30:02.913000 audit: BPF prog-id=194 op=LOAD Jun 25 16:30:02.913000 audit[5055]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:02.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366353637663564336565363830383962353666666166306134346635 Jun 25 16:30:02.971211 containerd[1279]: time="2024-06-25T16:30:02.970954048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586457776b-ftfcf,Uid:493a3ac2-a5d6-4efc-90c7-0e82b9a144b7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca\"" Jun 25 16:30:02.988907 containerd[1279]: time="2024-06-25T16:30:02.988841583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:30:03.870200 systemd[1]: Started sshd@22-164.92.91.188:22-139.178.89.65:33984.service - OpenSSH 
per-connection server daemon (139.178.89.65:33984). Jun 25 16:30:03.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.92.91.188:22-139.178.89.65:33984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:03.873616 kernel: kauditd_printk_skb: 34 callbacks suppressed Jun 25 16:30:03.873824 kernel: audit: type=1130 audit(1719333003.868:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.92.91.188:22-139.178.89.65:33984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:03.927496 systemd-networkd[1093]: cali4f3c4337f3d: Gained IPv6LL Jun 25 16:30:04.022783 sshd[5079]: Accepted publickey for core from 139.178.89.65 port 33984 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:30:04.031613 kernel: audit: type=1101 audit(1719333004.021:804): pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.031967 kernel: audit: type=1103 audit(1719333004.024:805): pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.021000 audit[5079]: USER_ACCT pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.024000 audit[5079]: CRED_ACQ pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.032638 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:04.034223 kernel: audit: type=1006 audit(1719333004.024:806): pid=5079 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 16:30:04.034338 kernel: audit: type=1300 audit(1719333004.024:806): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe071ddef0 a2=3 a3=7fef15c2f480 items=0 ppid=1 pid=5079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:04.024000 audit[5079]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe071ddef0 a2=3 a3=7fef15c2f480 items=0 ppid=1 pid=5079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:04.042025 kernel: audit: type=1327 audit(1719333004.024:806): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:04.024000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:04.048986 systemd-logind[1272]: New session 23 of user core. Jun 25 16:30:04.056307 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 16:30:04.078887 kernel: audit: type=1105 audit(1719333004.068:807): pid=5079 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.068000 audit[5079]: USER_START pid=5079 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.084048 kernel: audit: type=1103 audit(1719333004.079:808): pid=5081 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.079000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.652667 sshd[5079]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:04.657000 audit[5079]: USER_END pid=5079 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.658000 audit[5079]: CRED_DISP pid=5079 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.662545 systemd-logind[1272]: Session 23 logged out. Waiting for processes to exit. Jun 25 16:30:04.666359 kernel: audit: type=1106 audit(1719333004.657:809): pid=5079 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.666504 kernel: audit: type=1104 audit(1719333004.658:810): pid=5079 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:04.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-164.92.91.188:22-139.178.89.65:33984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:04.664767 systemd[1]: sshd@22-164.92.91.188:22-139.178.89.65:33984.service: Deactivated successfully. Jun 25 16:30:04.665789 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:30:04.668506 systemd-logind[1272]: Removed session 23. Jun 25 16:30:06.417644 systemd[1]: run-containerd-runc-k8s.io-b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c-runc.61vNH4.mount: Deactivated successfully. 
Jun 25 16:30:08.976785 containerd[1279]: time="2024-06-25T16:30:08.976703662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:08.980544 containerd[1279]: time="2024-06-25T16:30:08.980462226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:30:08.985929 containerd[1279]: time="2024-06-25T16:30:08.985850934Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:08.992201 containerd[1279]: time="2024-06-25T16:30:08.992135166Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:08.998105 containerd[1279]: time="2024-06-25T16:30:08.998043458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:30:09.001588 containerd[1279]: time="2024-06-25T16:30:09.001362623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 6.012149413s" Jun 25 16:30:09.001588 containerd[1279]: time="2024-06-25T16:30:09.001461065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:30:09.009517 containerd[1279]: time="2024-06-25T16:30:09.009455720Z" level=info msg="CreateContainer within sandbox \"cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:30:09.033661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416725867.mount: Deactivated successfully. Jun 25 16:30:09.046974 containerd[1279]: time="2024-06-25T16:30:09.046835833Z" level=info msg="CreateContainer within sandbox \"cf567f5d3ee68089b56ffaf0a44f5010ba11c1c14105863d12b809fe19605cca\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2acc1506657c847b79b88a6a73595dca247c2bbfeebe8bf1710eb95c0db7d826\"" Jun 25 16:30:09.050318 containerd[1279]: time="2024-06-25T16:30:09.048787590Z" level=info msg="StartContainer for \"2acc1506657c847b79b88a6a73595dca247c2bbfeebe8bf1710eb95c0db7d826\"" Jun 25 16:30:09.125277 systemd[1]: run-containerd-runc-k8s.io-2acc1506657c847b79b88a6a73595dca247c2bbfeebe8bf1710eb95c0db7d826-runc.mGnYEr.mount: Deactivated successfully. Jun 25 16:30:09.137590 systemd[1]: Started cri-containerd-2acc1506657c847b79b88a6a73595dca247c2bbfeebe8bf1710eb95c0db7d826.scope - libcontainer container 2acc1506657c847b79b88a6a73595dca247c2bbfeebe8bf1710eb95c0db7d826. 
Jun 25 16:30:09.167000 audit: BPF prog-id=195 op=LOAD Jun 25 16:30:09.169065 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:09.169133 kernel: audit: type=1334 audit(1719333009.167:812): prog-id=195 op=LOAD Jun 25 16:30:09.169000 audit: BPF prog-id=196 op=LOAD Jun 25 16:30:09.171984 kernel: audit: type=1334 audit(1719333009.169:813): prog-id=196 op=LOAD Jun 25 16:30:09.172078 kernel: audit: type=1300 audit(1719333009.169:813): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5044 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.169000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=5044 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636331353036363537633834376237396238386136613733353935 Jun 25 16:30:09.182221 kernel: audit: type=1327 audit(1719333009.169:813): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636331353036363537633834376237396238386136613733353935 Jun 25 16:30:09.169000 audit: BPF prog-id=197 op=LOAD Jun 25 16:30:09.186152 kernel: audit: type=1334 audit(1719333009.169:814): prog-id=197 op=LOAD Jun 25 16:30:09.186359 kernel: audit: type=1300 audit(1719333009.169:814): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5044 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.169000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=5044 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.191558 kernel: audit: type=1327 audit(1719333009.169:814): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636331353036363537633834376237396238386136613733353935 Jun 25 16:30:09.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636331353036363537633834376237396238386136613733353935 Jun 25 16:30:09.198444 kernel: audit: type=1334 audit(1719333009.169:815): prog-id=197 op=UNLOAD Jun 25 16:30:09.169000 audit: BPF prog-id=197 op=UNLOAD Jun 25 16:30:09.169000 audit: BPF prog-id=196 op=UNLOAD Jun 25 16:30:09.203038 kernel: audit: type=1334 audit(1719333009.169:816): prog-id=196 op=UNLOAD Jun 25 16:30:09.169000 audit: BPF prog-id=198 op=LOAD Jun 25 
16:30:09.205000 kernel: audit: type=1334 audit(1719333009.169:817): prog-id=198 op=LOAD Jun 25 16:30:09.169000 audit[5133]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=5044 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261636331353036363537633834376237396238386136613733353935 Jun 25 16:30:09.260854 containerd[1279]: time="2024-06-25T16:30:09.259645691Z" level=info msg="StartContainer for \"2acc1506657c847b79b88a6a73595dca247c2bbfeebe8bf1710eb95c0db7d826\" returns successfully" Jun 25 16:30:09.673746 systemd[1]: Started sshd@23-164.92.91.188:22-139.178.89.65:55628.service - OpenSSH per-connection server daemon (139.178.89.65:55628). Jun 25 16:30:09.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-164.92.91.188:22-139.178.89.65:55628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:09.753709 sshd[5167]: Accepted publickey for core from 139.178.89.65 port 55628 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:30:09.752000 audit[5167]: USER_ACCT pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:09.754000 audit[5167]: CRED_ACQ pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:09.754000 audit[5167]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4a357310 a2=3 a3=7f20201be480 items=0 ppid=1 pid=5167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.754000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:09.757589 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:09.770320 systemd-logind[1272]: New session 24 of user core. Jun 25 16:30:09.774281 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 16:30:09.782000 audit[5167]: USER_START pid=5167 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:09.785000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:09.796000 audit[5171]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:09.796000 audit[5171]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc8edadc40 a2=0 a3=7ffc8edadc2c items=0 ppid=2413 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:09.801000 audit[5171]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=5171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:09.801000 audit[5171]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffc8edadc40 a2=0 a3=7ffc8edadc2c items=0 ppid=2413 pid=5171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:09.971000 audit[5179]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:09.971000 audit[5179]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe62721090 a2=0 a3=7ffe6272107c items=0 ppid=2413 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:09.973000 audit[5179]: NETFILTER_CFG table=nat:127 family=2 entries=44 op=nft_register_rule pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:09.973000 audit[5179]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffe62721090 a2=0 a3=7ffe6272107c items=0 ppid=2413 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:09.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:10.203306 kubelet[2237]: E0625 16:30:10.203255 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:30:10.536116 sshd[5167]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:10.536000 audit[5167]: USER_END pid=5167 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.537000 audit[5167]: CRED_DISP pid=5167 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:10.540852 systemd[1]: sshd@23-164.92.91.188:22-139.178.89.65:55628.service: Deactivated successfully. Jun 25 16:30:10.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-164.92.91.188:22-139.178.89.65:55628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:10.542543 systemd-logind[1272]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:30:10.543716 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:30:10.545999 systemd-logind[1272]: Removed session 24. Jun 25 16:30:10.906679 kubelet[2237]: I0625 16:30:10.906512 2237 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-586457776b-ftfcf" podStartSLOduration=3.890807353 podCreationTimestamp="2024-06-25 16:30:01 +0000 UTC" firstStartedPulling="2024-06-25 16:30:02.9862896 +0000 UTC m=+99.027601218" lastFinishedPulling="2024-06-25 16:30:09.001930406 +0000 UTC m=+105.043242042" observedRunningTime="2024-06-25 16:30:09.919758613 +0000 UTC m=+105.961070256" watchObservedRunningTime="2024-06-25 16:30:10.906448177 +0000 UTC m=+106.947759820" Jun 25 16:30:10.944000 audit[5183]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:10.944000 audit[5183]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffc9065800 a2=0 a3=7fffc90657ec items=0 ppid=2413 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:10.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:10.947000 audit[5183]: NETFILTER_CFG table=nat:129 family=2 entries=51 op=nft_register_chain pid=5183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:10.947000 audit[5183]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7fffc9065800 a2=0 a3=7fffc90657ec items=0 ppid=2413 pid=5183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:10.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:11.967000 audit[5185]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 
16:30:11.967000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe7f098b10 a2=0 a3=7ffe7f098afc items=0 ppid=2413 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:11.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:11.970000 audit[5185]: NETFILTER_CFG table=nat:131 family=2 entries=58 op=nft_register_chain pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:30:11.970000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffe7f098b10 a2=0 a3=7ffe7f098afc items=0 ppid=2413 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:11.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:30:15.567149 kernel: kauditd_printk_skb: 37 callbacks suppressed Jun 25 16:30:15.567540 kernel: audit: type=1130 audit(1719333015.556:835): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-164.92.91.188:22-139.178.89.65:55630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:15.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-164.92.91.188:22-139.178.89.65:55630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:15.557465 systemd[1]: Started sshd@24-164.92.91.188:22-139.178.89.65:55630.service - OpenSSH per-connection server daemon (139.178.89.65:55630). 
Jun 25 16:30:15.712000 audit[5191]: USER_ACCT pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.718351 kernel: audit: type=1101 audit(1719333015.712:836): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.726547 kernel: audit: type=1103 audit(1719333015.719:837): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.719000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.726877 sshd[5191]: Accepted publickey for core from 139.178.89.65 port 55630 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:30:15.732834 kernel: audit: type=1006 audit(1719333015.726:838): pid=5191 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:30:15.739808 kernel: audit: type=1300 audit(1719333015.726:838): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4c03c300 a2=3 a3=7f744212c480 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:15.726000 audit[5191]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4c03c300 a2=3 a3=7f744212c480 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:15.740688 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:15.745391 kernel: audit: type=1327 audit(1719333015.726:838): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:15.726000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:15.754645 systemd-logind[1272]: New session 25 of user core. Jun 25 16:30:15.763319 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 16:30:15.782858 kernel: audit: type=1105 audit(1719333015.772:839): pid=5191 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.772000 audit[5191]: USER_START pid=5191 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.781000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:15.789100 kernel: audit: type=1103 audit(1719333015.781:840): pid=5210 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.058594 sshd[5191]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:16.059000 audit[5191]: USER_END pid=5191 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.069117 kernel: audit: type=1106 audit(1719333016.059:841): pid=5191 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.069706 systemd[1]: sshd@24-164.92.91.188:22-139.178.89.65:55630.service: Deactivated successfully. Jun 25 16:30:16.071184 systemd-logind[1272]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:30:16.071920 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:30:16.059000 audit[5191]: CRED_DISP pid=5191 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.081615 kernel: audit: type=1104 audit(1719333016.059:842): pid=5191 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:16.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-164.92.91.188:22-139.178.89.65:55630 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:16.082710 systemd-logind[1272]: Removed session 25. 
Jun 25 16:30:17.509000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:17.509000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001f2f770 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:30:17.509000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:17.509000 audit[2083]: AVC avc: denied { watch } for pid=2083 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c513,c1007 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:17.509000 audit[2083]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0024bfc20 a2=fc6 a3=0 items=0 ppid=1948 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c513,c1007 key=(null) Jun 25 16:30:17.509000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:30:19.203642 kubelet[2237]: E0625 16:30:19.203595 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:30:19.543000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:19.543000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=76 a1=c0121d8d00 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:30:19.543000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:19.550000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=524883 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:19.550000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=76 a1=c012624fc0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:30:19.550000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:19.550000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=524877 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:19.550000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c011c53200 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:30:19.550000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:19.551000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:19.551000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c012624ff0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:30:19.551000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:19.563000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=524881 scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:19.563000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=76 a1=c0126251a0 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:30:19.563000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:19.563000 audit[2130]: AVC avc: denied { watch } for pid=2130 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=524866 
scontext=system_u:system_r:container_t:s0:c364,c831 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:30:19.563000 audit[2130]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=77 a1=c0121d8f20 a2=fc6 a3=0 items=0 ppid=1944 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c364,c831 key=(null) Jun 25 16:30:19.563000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3136342E39322E39312E313838002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 16:30:21.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-164.92.91.188:22-139.178.89.65:59514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:21.078324 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 16:30:21.078423 kernel: audit: type=1130 audit(1719333021.075:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-164.92.91.188:22-139.178.89.65:59514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:21.075924 systemd[1]: Started sshd@25-164.92.91.188:22-139.178.89.65:59514.service - OpenSSH per-connection server daemon (139.178.89.65:59514). Jun 25 16:30:21.130000 audit[5230]: USER_ACCT pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.134128 sshd[5230]: Accepted publickey for core from 139.178.89.65 port 59514 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:30:21.137072 kernel: audit: type=1101 audit(1719333021.130:853): pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.136000 audit[5230]: CRED_ACQ pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.138139 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:21.143367 kernel: audit: type=1103 audit(1719333021.136:854): pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.143525 kernel: audit: type=1006 audit(1719333021.136:855): pid=5230 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 16:30:21.136000 audit[5230]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc156d7d40 a2=3 a3=7f0d70a62480 items=0 ppid=1 pid=5230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:21.157857 kernel: audit: type=1300 audit(1719333021.136:855): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc156d7d40 a2=3 a3=7f0d70a62480 items=0 ppid=1 pid=5230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:21.159258 kernel: audit: type=1327 audit(1719333021.136:855): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:21.136000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:21.158337 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 16:30:21.160932 systemd-logind[1272]: New session 26 of user core. Jun 25 16:30:21.169000 audit[5230]: USER_START pid=5230 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.176005 kernel: audit: type=1105 audit(1719333021.169:856): pid=5230 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.178000 audit[5232]: CRED_ACQ pid=5232 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.187154 kernel: audit: type=1103 audit(1719333021.178:857): pid=5232 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.395834 sshd[5230]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:21.399000 audit[5230]: USER_END pid=5230 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.406376 kernel: audit: type=1106 audit(1719333021.399:858): pid=5230 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.407136 systemd[1]: sshd@25-164.92.91.188:22-139.178.89.65:59514.service: Deactivated successfully. Jun 25 16:30:21.408474 systemd[1]: session-26.scope: Deactivated successfully. 
Jun 25 16:30:21.403000 audit[5230]: CRED_DISP pid=5230 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.415091 kernel: audit: type=1104 audit(1719333021.403:859): pid=5230 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:21.416797 systemd-logind[1272]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:30:21.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-164.92.91.188:22-139.178.89.65:59514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:21.418577 systemd-logind[1272]: Removed session 26. Jun 25 16:30:25.203407 kubelet[2237]: E0625 16:30:25.203351 2237 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jun 25 16:30:26.428801 systemd[1]: Started sshd@26-164.92.91.188:22-139.178.89.65:43056.service - OpenSSH per-connection server daemon (139.178.89.65:43056). Jun 25 16:30:26.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-164.92.91.188:22-139.178.89.65:43056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:26.431526 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:30:26.431708 kernel: audit: type=1130 audit(1719333026.428:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-164.92.91.188:22-139.178.89.65:43056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:30:26.480000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.482160 sshd[5247]: Accepted publickey for core from 139.178.89.65 port 43056 ssh2: RSA SHA256:dnARpgcqKAi8zw13ALMCzNEqvQgfn3eJ2P1ura6t/RA Jun 25 16:30:26.485159 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:30:26.487207 kernel: audit: type=1101 audit(1719333026.480:862): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.487418 kernel: audit: type=1103 audit(1719333026.483:863): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.483000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.497065 kernel: audit: type=1006 audit(1719333026.483:864): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:30:26.483000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf3a80240 a2=3 a3=7f02c817f480 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:26.503214 kernel: audit: type=1300 audit(1719333026.483:864): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf3a80240 a2=3 a3=7f02c817f480 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:30:26.483000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:26.506197 kernel: audit: type=1327 audit(1719333026.483:864): proctitle=737368643A20636F7265205B707269765D Jun 25 16:30:26.511578 systemd-logind[1272]: New session 27 of user core. Jun 25 16:30:26.515553 systemd[1]: Started session-27.scope - Session 27 of User core. 
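[Editor's note] In the records for this new connection, auid=4294967295 and ses=4294967295 appear until the type=1006 LOGIN record above switches them to auid=500 / ses=27: 4294967295 is -1 stored as an unsigned 32-bit value, the kernel's "unset" marker for the login UID and audit session. A small sketch (illustrative, values taken from the record above) showing the conversion and the before/after pair:

    # 4294967295 is (uint32)(-1), audit's "unset" marker for auid/ses.
    UNSET = -1 & 0xFFFFFFFF
    assert UNSET == 4294967295 == 2**32 - 1

    # From the type=1006 LOGIN record: privileged sshd assigns the authenticated
    # user's loginuid and a fresh audit session id.
    old = {"auid": 4294967295, "ses": 4294967295}   # before: unset
    new = {"auid": 500, "ses": 27}                  # after:  user "core", session 27
    for key in old:
        print(f"{key}: {'unset' if old[key] == UNSET else old[key]} -> {new[key]}")

Once set, the same auid=500 ses=27 pair tags every later record of the session (USER_START through CRED_DISP), which is how the open/close lines that follow can be tied back to this login.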
Jun 25 16:30:26.521000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.527979 kernel: audit: type=1105 audit(1719333026.521:865): pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.524000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.533030 kernel: audit: type=1103 audit(1719333026.524:866): pid=5249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.682762 sshd[5247]: pam_unix(sshd:session): session closed for user core Jun 25 16:30:26.684000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.689985 kernel: audit: type=1106 audit(1719333026.684:867): pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.684000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.692105 systemd-logind[1272]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:30:26.696311 kernel: audit: type=1104 audit(1719333026.684:868): pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jun 25 16:30:26.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-164.92.91.188:22-139.178.89.65:43056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:30:26.694104 systemd[1]: sshd@26-164.92.91.188:22-139.178.89.65:43056.service: Deactivated successfully. Jun 25 16:30:26.695232 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:30:26.697815 systemd-logind[1272]: Removed session 27. Jun 25 16:30:28.561394 systemd[1]: run-containerd-runc-k8s.io-b410cac96b327a22aa80943f556cfcf031d9073379c6b00be1a178bb6a57823c-runc.cUjcEE.mount: Deactivated successfully.
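[Editor's note] The kubelet dns.go:153 error logged at 16:30:25 above ("Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2") means the node's resolv.conf listed more nameservers than kubelet will pass to pods; exactly three addresses survive in the log line, consistent with the usual cap of three. A minimal sketch of that truncation behaviour, with an assumed cap and an assumed resolv.conf for illustration (the fourth address below is hypothetical, not from this host):

    # Mimic the omission behind kubelet's "Nameserver limits exceeded" error.
    MAX_NAMESERVERS = 3   # assumed cap, consistent with the three addresses kept above

    def apply_nameserver_limit(nameservers, limit=MAX_NAMESERVERS):
        """Return the nameservers actually applied and whether any were dropped."""
        if len(nameservers) <= limit:
            return nameservers, False
        return nameservers[:limit], True

    # Hypothetical resolv.conf with a duplicate entry plus one extra server.
    configured = ["67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"]
    applied, truncated = apply_nameserver_limit(configured)
    if truncated:
        print("Nameserver limits exceeded, applied:", " ".join(applied))

Note that duplicates are not collapsed before the cut, which is why 67.207.67.2 can appear twice in the applied line while a later server is dropped.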